AI-Enabled Sensing Technology and Data Analysis Techniques for Intelligent Human-Computer Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: 20 September 2024 | Viewed by 20052

Special Issue Editors


Dr. Boštjan Šumak
Guest Editor
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
Interests: computer–human interaction; user experience; IoT; web technology; intelligent user interfaces; accessibility; technologies for touchless HCI

Dr. Maja Pušnik
Guest Editor
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
Interests: empirical research methods; operations research; behavioral operations research; sensor-based process optimization

Special Issue Information

Dear Colleagues,

In the last few years, the concept of user experience (UX) has been revolutionized by artificial intelligence (AI). The convergence of user interfaces (UI) and AI has resulted in intelligent user interfaces (IUIs) with enhanced flexibility, usability, and interaction. The term IUI is not new; however, with the explosion of Internet of Things (IoT) devices surrounding users, new sources of valuable information about users’ behavior, interests, and preferences have emerged. Innovations in AI and machine learning can leverage such information to develop IUIs that provide increased productivity, efficiency, effectiveness, and naturalness of interaction. AI can process the data generated during a user’s interaction with the computer, captured through smart sensing technology, to recognize HCI patterns and make small changes to the UI to account for them. Over time, such small changes will deliver a user interface adapted to the individual’s needs and preferences.

The Special Issue is dedicated to new advances in developing innovative solutions for intelligent human–computer interaction and its applications in daily life. The key aim of this Special Issue is to bring together state-of-the-art research and innovative human–computer interaction solutions for smart user interfaces, in addition to discoveries, new ideas, and innovative improvements.

Special Issue topics include but are not limited to:

  • IoT technology integration in intelligent user interfaces (IUIs)
  • Human–computer interaction (HCI) in IUIs
  • HCI pattern recognition
  • Modeling and analysis of sensors for HCI pattern recognition
  • User experience (UX) in IUIs
  • Designing and evaluating IUIs
  • Design patterns for IUIs
  • Architectures of IUIs
  • IUIs for older users
  • IUIs for users with disabilities
  • IUIs for mobile applications
  • IUIs in e-learning
  • IoT-aided digital assistants in HCI
  • Non-invasive brain–computer interaction
  • Knowledge engineering in IUIs
  • Modeling and evaluating effects of IUIs on HCI
  • Machine-learning-based and Artificial Intelligence (AI)-oriented technologies applied to user interfaces
  • Novel AI techniques for the processing of combined behavioral data (e.g., gestural data, physiological data, facial recognition, deep learning, sensor readings, etc.) for improving UX in IUIs
  • Artificial Intelligence of Things for IUIs
  • Usability testing in IUIs
  • Acceptance and use of IUIs
  • Trust, confidence, reliance, and privacy in IUIs
  • Internet of Behavior (IoB)
  • IoT-assisted computationally intelligent methods for HCI pattern recognition
  • Cloud computing for HCI pattern recognition
  • Deep learning for HCI pattern recognition

Dr. Boštjan Šumak
Dr. Maja Pušnik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Human–computer interaction (HCI)
  • Internet of Things (IoT)
  • Internet of Behaviors (IoB)
  • Intelligent sensors for HCI
  • Artificial Intelligence (AI) for HCI
  • HCI sensor data analytics
  • Intelligent user interfaces (IUI)
  • Design patterns for intelligent user interfaces
  • User interaction pattern recognition
  • User experience (UX)

Published Papers (8 papers)


Research

15 pages, 1207 KiB  
Article
From Movements to Metrics: Evaluating Explainable AI Methods in Skeleton-Based Human Activity Recognition
by Kimji N. Pellano, Inga Strümke and Espen A. F. Ihlen
Sensors 2024, 24(6), 1940; https://0-doi-org.brum.beds.ac.uk/10.3390/s24061940 - 18 Mar 2024
Viewed by 576
Abstract
The advancement of deep learning in human activity recognition (HAR) using 3D skeleton data is critical for applications in healthcare, security, sports, and human–computer interaction. This paper tackles a well-known gap in the field: the lack of testing of the applicability and reliability of XAI evaluation metrics in the skeleton-based HAR domain. To address this problem, we tested established XAI metrics, namely faithfulness and stability, on Class Activation Mapping (CAM) and Gradient-weighted Class Activation Mapping (Grad-CAM). This study introduces a perturbation method that produces variations within the error tolerance of motion sensor tracking, ensuring the resultant skeletal data points remain within the plausible output range of human movement as captured by the tracking device. We used the NTU RGB+D 60 dataset and the EfficientGCN architecture for HAR model training and testing. The evaluation involved systematically perturbing the 3D skeleton data by applying controlled displacements at different magnitudes to assess the impact on XAI metric performance across multiple action classes. Our findings reveal that faithfulness may not consistently serve as a reliable metric across all classes for the EfficientGCN model, indicating its limited applicability in certain contexts. In contrast, stability proves to be a more robust metric, showing dependability across different perturbation magnitudes. Additionally, CAM and Grad-CAM yielded almost identical explanations, leading to closely similar metric outcomes. This suggests a need for the exploration of additional metrics and the application of more diverse XAI methods to broaden the understanding and effectiveness of XAI in skeleton-based HAR.
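The perturbation idea, random displacements bounded by the tracking device's error tolerance, can be sketched as follows. The array shapes, the 0.01 m tolerance, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def perturb_skeleton(skeleton, tolerance=0.01, seed=None):
    """Apply bounded random displacements to 3D joint coordinates.

    skeleton:  array of shape (frames, joints, 3), coordinates in metres.
    tolerance: maximum per-axis displacement, chosen to stay within the
               tracking sensor's error range (the value here is illustrative).
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-tolerance, tolerance, size=skeleton.shape)
    return skeleton + noise

# Perturb a toy 2-frame, 25-joint sequence (NTU RGB+D skeletons have 25 joints)
clean = np.zeros((2, 25, 3))
noisy = perturb_skeleton(clean, tolerance=0.01, seed=0)
```

A stability-style metric would then compare the explanations computed on `clean` and `noisy` inputs; because every displacement stays within the sensor's error band, any large change in the explanation is attributable to the XAI method rather than to implausible input.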

12 pages, 3117 KiB  
Communication
Virtual Scenarios of Earthquake Early Warning to Disaster Management in Smart Cities Based on Auxiliary Classifier Generative Adversarial Networks
by Jae-Kwang Ahn, Byeonghak Kim, Bonhwa Ku and Eui-Hong Hwang
Sensors 2023, 23(22), 9209; https://0-doi-org.brum.beds.ac.uk/10.3390/s23229209 - 16 Nov 2023
Viewed by 767
Abstract
Effective response strategies to earthquake disasters are crucial for disaster management in smart cities. However, in regions where earthquakes do not occur frequently, model construction may be difficult due to a lack of training data. To address this issue, there is a need for technology that can generate earthquake scenarios for response training at any location. We propose a model for generating earthquake scenarios using auxiliary classifier generative adversarial network (AC-GAN)-based data synthesis. The proposed AC-GAN model generates various earthquake scenarios by incorporating an auxiliary classifier learning process into the discriminator of the GAN. Our results at borehole sensors showed that the seismic data generated by the proposed model had characteristics similar to actual data. To further validate our results, we compared the generated intensity measures (IMs, such as PGA, PGV, and SA) with ground motion prediction equations (GMPEs). Furthermore, we evaluated the potential of using the generated scenarios for earthquake early warning training. The proposed model and algorithm have significant potential for advancing seismic analysis and detection management systems, and also contribute to disaster management.

14 pages, 4552 KiB  
Article
UX Framework Including Imbalanced UX Dataset Reduction Method for Analyzing Interaction Trends of Agent Systems
by Bonwoo Gu and Yunsick Sung
Sensors 2023, 23(3), 1651; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031651 - 02 Feb 2023
Viewed by 1428
Abstract
The performance of game AI can significantly impact the purchase decisions of users. User experience (UX) technology can evaluate user satisfaction with game AI by analyzing user interaction input through a user interface (UI). Although traditional UX-based game agent systems use a UX evaluation to identify the common interaction trends of multiple users, this approach is limited when it comes to creating a UX evaluation and identifying the interaction trend for each individual user. To provide a personalized game agent system, the loss of each user's UX data features should be minimized. This paper proposes a UX framework for game agent systems in which a UX data reduction method is applied to improve the interaction for each user. The proposed UX framework maintains non-trend data features in a UX dataset where overfitting occurs, providing a personalized game agent system by minimizing the loss of each user's UX data features. The framework is applied to a game called “Freestyle” to verify its performance. With the proposed framework, the imbalanced UX dataset of the Freestyle game minimizes overfitting and becomes a UX dataset that reflects the interaction trend of each user. The UX dataset generated by the framework is then used to provide customized game agents that enhance interaction for each user. Furthermore, the proposed UX framework is expected to contribute to research on UX-based personalized services.

36 pages, 4662 KiB  
Article
Intelligent User Interfaces and Their Evaluation: A Systematic Mapping Study
by Saša Brdnik, Tjaša Heričko and Boštjan Šumak
Sensors 2022, 22(15), 5830; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155830 - 04 Aug 2022
Cited by 7 | Viewed by 3007
Abstract
Intelligent user interfaces (IUIs) are driven by the goal of improving human–computer interaction (HCI), mainly improving user interfaces’ user experience (UX) or usability with the help of artificial intelligence. The main goal of this study is to find, assess, and synthesize existing state-of-the-art work in the field of IUI, with an additional focus on the evaluation of IUIs. This study analyzed 211 studies published in the field between 2012 and 2022. Studies are most frequently tied to the HCI and SE domains. Definitions of IUI were examined, showing that adaptation, representation, and intelligence are the key characteristics associated with IUIs, whereas adapting, reasoning, and representing are the verbs most commonly used in their description. Evaluation of IUIs is mainly conducted with experiments and questionnaires, though usability and UX are not considered together in evaluations. Most evaluations (81% of studies) reported partial or complete improvement in usability or UX. A shortage of evaluation tools, methods, and metrics tailored to IUIs was noticed. The most frequently used empirical data collection methods and data sources in IUI evaluation studies are experiments, prototype development, and questionnaires.

18 pages, 4774 KiB  
Article
Indoor Localization Method of Personnel Movement Based on Non-Contact Electrostatic Potential Measurements
by Menghua Man, Yongqiang Zhang, Guilei Ma, Ziqiang Zhang and Ming Wei
Sensors 2022, 22(13), 4698; https://0-doi-org.brum.beds.ac.uk/10.3390/s22134698 - 22 Jun 2022
Cited by 3 | Viewed by 1463
Abstract
The indoor localization of people is key to realizing “smart city” applications such as smart homes, elderly care, and energy-saving grids. Localization based on electrostatic information is a passive, label-free technique with a good balance of localization accuracy, system power consumption, privacy protection, and environmental friendliness. However, the physical configuration of each application scenario differs, so the transfer function from the human electrostatic potential to the sensor signal is not unique, which limits the generality of the method. Therefore, this study proposed an indoor localization method based on on-site measured electrostatic signals and symbolic regression machine learning algorithms. A remote, non-contact human electrostatic potential sensor was designed and implemented, and a prototype test system was built. Indoor localization of moving people was achieved in a 5 m × 5 m space with 80% positioning accuracy and a median absolute error of 0.4–0.6 m. The method achieves on-site calibration without requiring physical information about the actual scene, has low computational complexity, and requires only a small amount of training data.

40 pages, 8891 KiB  
Article
Sensors and Artificial Intelligence Methods and Algorithms for Human–Computer Intelligent Interaction: A Systematic Mapping Study
by Boštjan Šumak, Saša Brdnik and Maja Pušnik
Sensors 2022, 22(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010020 - 21 Dec 2021
Cited by 17 | Viewed by 7661
Abstract
To equip computers with human communication skills and to enable natural interaction between the computer and a human, intelligent solutions are required based on artificial intelligence (AI) methods, algorithms, and sensor technology. This study aimed at identifying and analyzing the state-of-the-art AI methods and algorithms and sensor technology in existing human–computer intelligent interaction (HCII) research to explore trends in HCII research, categorize existing evidence, and identify potential directions for future research. We conducted a systematic mapping study of the HCII body of research. Four hundred fifty-four studies published in various journals and conferences between 2010 and 2021 were identified and analyzed. Studies in the HCII and IUI fields have primarily focused on intelligent recognition of emotions, gestures, and facial expressions using sensor technology such as cameras, EEG, Kinect, wearable sensors, eye trackers, gyroscopes, and others. Researchers most often apply deep-learning and instance-based AI methods and algorithms. The support vector machine (SVM) is the most widely used algorithm for various kinds of recognition, primarily emotion, facial expression, and gesture recognition. The convolutional neural network (CNN) is the most often used deep-learning algorithm in emotion recognition, facial recognition, and gesture recognition solutions.
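As a minimal illustration of the SVM's role in such recognition pipelines, the sketch below trains a linear SVM by sub-gradient descent on synthetic two-class "gesture feature" vectors. The feature dimensions, hyperparameters, and data are invented for the example and stand in for real sensor-derived features.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM trained by sub-gradient descent on the regularized
    hinge loss. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # margin violators
        grad_w = lam * w - (y[viol] @ X[viol]) / n
        grad_b = -np.sum(y[viol]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two synthetic clusters standing in for sensor-derived gesture features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 4)), rng.normal(2.0, 0.5, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

The surveyed studies typically use kernel SVMs from libraries such as scikit-learn; the hand-rolled linear version here only makes the max-margin objective explicit.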

17 pages, 3283 KiB  
Article
Level-K Classification from EEG Signals Using Transfer Learning
by Dor Mizrahi, Inon Zuckerman and Ilan Laufer
Sensors 2021, 21(23), 7908; https://0-doi-org.brum.beds.ac.uk/10.3390/s21237908 - 27 Nov 2021
Cited by 10 | Viewed by 1503
Abstract
Tacit coordination games are games in which communication between the players is not allowed or not possible. In these games, the more salient solutions, which are often perceived as more prominent, are referred to as focal points. The level-k model states that players’ decisions in tacit coordination games are a consequence of applying different decision rules at different depths of reasoning (level k). A player at Lk = 0 will randomly pick a solution, whereas a player at Lk ≥ 1 will apply their strategy based on their beliefs regarding the actions of the other players. The goal of this study was to examine, for the first time, the neural correlates of different reasoning levels in tacit coordination games. To that end, we designed a combined behavioral–electrophysiological study with three different conditions, each resembling a different depth-of-reasoning state: (1) resting state, (2) picking, and (3) coordination. By utilizing transfer learning and deep learning, we were able to achieve a precision of almost 100% (99.49%) for the resting-state condition, while for the picking and coordination conditions the precision was 69.53% and 72.44%, respectively. The application of these findings and related future research options are discussed.

17 pages, 7775 KiB  
Article
Learning Macromanagement in Starcraft by Deep Reinforcement Learning
by Wenzhen Huang, Qiyue Yin, Junge Zhang and Kaiqi Huang
Sensors 2021, 21(10), 3332; https://0-doi-org.brum.beds.ac.uk/10.3390/s21103332 - 11 May 2021
Cited by 2 | Viewed by 2285
Abstract
StarCraft is a real-time strategy game that provides a complex environment for AI research. Macromanagement, i.e., selecting appropriate units to build depending on the current state, is one of the most important problems in this game. To reduce the requirement for expert knowledge and enhance the coordination of the systematic bot, we select reinforcement learning (RL) to tackle the problem of macromanagement. We propose a novel deep RL method, Mean Asynchronous Advantage Actor-Critic (MA3C), which computes the approximate expected policy gradient instead of the gradient of a sampled action to reduce the variance of the gradient, and encodes the history queue with a recurrent neural network to tackle the problem of imperfect information. The experimental results show that MA3C achieves a very high win rate of approximately 90% against the weaker opponents and improves the win rate by about 30% against the stronger opponents. We also propose a novel method to visualize and interpret the policy learned by MA3C. Combining the visualized results with snapshots of games, we find that the learned macromanagement not only adapts to the game rules and the policy of the opponent bot, but also cooperates well with the other modules of MA3C-Bot.
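The variance-reduction idea, averaging the policy gradient analytically over all actions rather than using one sampled action, can be shown for a simple softmax policy. The logits and advantage values below are made-up numbers for illustration, not MA3C's actual architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_policy_gradient(logits, advantages):
    """Gradient of E_{a~pi}[A(a)] w.r.t. softmax logits, summed analytically
    over ALL actions. A sampled-action estimate would instead use
    A(a) * grad log pi(a|s) for a single drawn a, which is noisier."""
    p = softmax(logits)
    # d/d logit_k of sum_i p_i A_i  =  p_k * (A_k - E_p[A])
    return p * (advantages - p @ advantages)

logits = np.array([0.1, -0.3, 0.7, 0.0])   # scores for 4 candidate build actions
adv = np.array([0.5, -0.2, 1.0, -0.8])     # advantage estimate per action
grad = expected_policy_gradient(logits, adv)
```

Because the softmax probabilities sum to one, the components of this gradient always sum to zero: ascending it shifts probability mass toward actions with above-average advantage without needing a sampled trajectory per action.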
