Topic Editors

Prof. Dr. Enrico Vezzetti
Department of Management and Production Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
Dr. Andrea Luigi Guerra
Costech Laboratory, Compiègne University of Technology, 60200 Compiègne, France
Dr. Gabriele Baronio
Department of Mechanical and Industrial Engineering, University of Brescia, Brescia, Italy
Dr. Domenico Speranza
Department of Civil and Mechanical Engineering, University of Cassino and Lazio Meridionale, Cassino, Italy
Dr. Luca Ulrich
Department of Management and Production Engineering, Politecnico di Torino, 10129 Turin, Italy

Human–Machine Interaction

Abstract submission deadline
closed (31 December 2022)
Manuscript submission deadline
closed (31 March 2023)
Viewed by
106425

Topic Information

Dear Colleagues,

Since the advent of the computer, numerous studies in the domain of Human-Machine Interaction (HMI) have been conducted to constantly improve the communication between human operators and automated systems. Interest in HMI, also known as Man-Machine Interaction/Interface, has grown in several fields, from medicine to entertainment, as well as in education and in the industrial domain, where it offers the opportunity to improve the performance of all the stakeholders involved in the manufacturing process. Moreover, HMI is boosting the transition to Industry 4.0, fostering the spread of new technologies that increase efficiency and product quality.

The human component of HMI requires the engagement of behavioral sciences to study users’ behavior and to identify their peculiarities in terms of habits and ease of interaction with intelligent machines. In this sense, HMI falls within user-centered design (UCD) theory, according to which a full understanding of tasks, environments, and users is essential in product design and development. These processes should focus on users, and user feedback should be assessed constantly for subsequent refinements toward the final result. For instance, monitoring users’ emotional response with methodologies such as electroencephalography (EEG) and facial expression recognition (FER) has proven very effective in obtaining a truthful evaluation of a proposed solution. On the other hand, the technological component of HMI can benefit from the most recent developments in RGB-D cameras and smartphone devices, which make it possible to create a tangible interaction between the real and the virtual domain, perceived as a “window on the real world”.

All these possibilities can be further strengthened by extended reality (XR). The flexibility of augmented reality (AR) and virtual reality (VR) supports their synergic use with other technologies, acting as an “integration platform” for technology empowerment, for instance together with machine learning and deep learning, which, particularly in the computer vision domain, can provide disruptive solutions, e.g., for disease diagnosis and precise surgical assistance in medicine. The introduction of headsets and smart glasses has changed the way we interact with the environment, opening the door to mutual influence between the physical and the virtual world.

The aim of this Topic is to advance scholarly understanding of how to improve HMI theoretically, empirically, methodologically, and practically, integrating innovative technologies and new methodologies to impact society through the design of cutting-edge frameworks according to the principles of UCD.

Prof. Dr. Enrico Vezzetti
Dr. Andrea Luigi Guerra
Dr. Gabriele Baronio
Dr. Domenico Speranza
Dr. Luca Ulrich

Topic Editors

Keywords

  • Human-Computer Interaction (HCI)
  • Human-Machine Interaction (HMI)
  • User-Centered Design (UCD)
  • Human body modeling
  • Industry 4.0
  • Industry 5.0
  • Product design
  • Human factors and ergonomics
  • Modeling and simulation
  • RGB-D cameras
  • Behavioral sciences
  • Electroencephalogram (EEG)
  • Face Expression Recognition (FER)
  • Computer vision
  • Artificial intelligence
  • Image processing
  • Machine learning
  • Deep learning
  • Mixed reality
  • Virtual reality
  • Augmented reality
  • Robotics
  • Intelligent robotics
  • Industrial robotics
  • Human-Robot Interaction

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 days | CHF 2400
Brain Sciences (brainsci) | 3.3 | 3.9 | 2011 | 15.6 days | CHF 2200
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.6 days | CHF 2400
Sensors (sensors) | 3.9 | 6.8 | 2001 | 17 days | CHF 2600
Systems (systems) | 1.9 | 3.3 | 2013 | 16.8 days | CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (43 papers)

12 pages, 2400 KiB  
Article
Pupil Localization Algorithm Based on Improved U-Net Network
by Gongzheng Chen, Zhenghong Dong, Jue Wang and Lurui Xia
Electronics 2023, 12(12), 2591; https://doi.org/10.3390/electronics12122591 - 08 Jun 2023
Cited by 2 | Viewed by 787
Abstract
Accurately localizing the pupil is an essential requirement of some new human–computer interaction methods. In the past, a lot of work has been done to solve the pupil localization problem based on the appearance characteristics of the eye, but these methods are often specific to the scenario. In this paper, we propose an improved U-net network to solve the pupil location problem. This network uses the attention mechanism to automatically select the contribution of coded and uncoded features in the model during the skip connection stage of the U-net network in the channel and spatial axis. It can make full use of the two features of the model in the decoding stage, which is beneficial for improving the performance of the model. By comparing the sequential channel attention module and spatial attention module, average pooling and maximum pooling operations, and different attention mechanisms, the model was finally determined and validated on two public data sets, which proves the validity of the proposed model.
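
To make the attention mechanism described above concrete, here is a minimal PyTorch sketch of a sequential channel-then-spatial attention block (combining average and max pooling, as the abstract mentions) applied at a U-Net skip connection. The reduction ratio, kernel size, and layer shapes are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a CBAM-style attention block at a U-Net skip connection:
# channel attention, then spatial attention, each using avg + max pooling.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))             # global max pooling
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # pool across channels
        mx = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class AttentiveSkip(nn.Module):
    """Weight concatenated encoder ("coded") and decoder features."""
    def __init__(self, channels):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, encoder_feat, decoder_feat):
        x = torch.cat([encoder_feat, decoder_feat], dim=1)
        return self.sa(self.ca(x))                    # channel, then spatial

skip = AttentiveSkip(channels=128)
print(skip(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)).shape)
```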

19 pages, 6995 KiB  
Article
User Experience Design for Social Robots: A Case Study in Integrating Embodiment
by Ana Corrales-Paredes, Diego Ortega Sanz, María-José Terrón-López and Verónica Egido-García
Sensors 2023, 23(11), 5274; https://doi.org/10.3390/s23115274 - 01 Jun 2023
Cited by 2 | Viewed by 2394
Abstract
Social robotics is an emerging field with a high level of innovation. For many years, it was a concept framed in the literature and theoretical approaches. Scientific and technological advances have made it possible for robots to progressively make their way into different areas of our society, and now, they are ready to make the leap out of the industry and extend their presence into our daily lives. In this sense, user experience plays a fundamental role in achieving a smooth and natural interaction between robots and humans. This research focused on the user experience approach in terms of the embodiment of a robot, centring on its movements, gestures, and dialogues. The aim was to investigate how the interaction between robotic platforms and humans takes place and what differential aspects should be considered when designing the robot tasks. To achieve this objective, a qualitative and quantitative study was conducted based on a real interview between several human users and the robotic platform. The data were gathered by recording the session and having each user complete a form. The results showed that participants generally enjoyed interacting with the robot and found it engaging, which led to greater trust and satisfaction. However, delays and errors in the robot’s responses caused frustration and disconnection. The study found that incorporating embodiment into the design of the robot improved the user experience, and the robot’s personality and behaviour were significant factors. It was concluded that robotic platforms and their appearance, movements, and way of communicating have a decisive influence on the user’s opinion and the way they interact.

23 pages, 10459 KiB  
Article
Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations
by Paul Chojecki, Dominykas Strazdas, David Przewozny, Niklas Gard, Detlef Runde, Niklas Hoerner, Ayoub Al-Hamadi, Peter Eisert and Sebastian Bosse
Sensors 2023, 23(11), 5043; https://doi.org/10.3390/s23115043 - 24 May 2023
Cited by 1 | Viewed by 1639
Abstract
Multimodal user interfaces promise natural and intuitive human–machine interactions. However, is the extra effort for the development of a complex multisensor system justified, or can users also be satisfied with only one input modality? This study investigates interactions in an industrial weld inspection workstation. Three unimodal interfaces, including spatial interaction with buttons augmented on a workpiece or a worktable, and speech commands, were tested individually and in a multimodal combination. Within the unimodal conditions, users preferred the augmented worktable, but overall, the interindividual usage of all input technologies in the multimodal condition was ranked best. Our findings indicate that the implementation and the use of multiple input modalities is valuable and that it is difficult to predict the usability of individual input modalities for complex systems.

22 pages, 4737 KiB  
Article
Towards the Senior Resident Social Interaction System: A Case Study of Interactive Gallery
by Cun Li, Linghao Zhang, Xu Lin, Kai Kang, Jun Hu, Bart Hengeveld and Caroline Hummels
Systems 2023, 11(4), 204; https://doi.org/10.3390/systems11040204 - 18 Apr 2023
Viewed by 1333
Abstract
The number of older adults residing in nursing institutions is increasing, and many of them experience social isolation. The social interaction of older adults constitutes a complex system that involves multiple stakeholders, including fellow residents, caregivers, members of the local community, etc. This paper proposes an Interactive Gallery, comprising a cluster of scenery collectors and an interactive installation resembling a gallery. It aims to promote social interaction among nursing home residents and members of the local community, as well as between senior residents within the nursing home. We conducted a field study that employed behavior observation and semi-structured interviews. Our findings show that the Interactive Gallery had a positive impact on the social interaction of senior participants, and it also stimulated their interest in sharing their experiences with individuals outside of the nursing home. The implications of our field study are significant. We highlight the social interaction system and behavioral characteristics of senior residents, strategies for enhancing social interaction within the nursing home, and strategies for promoting social interaction between senior residents and members of the local community. The Interactive Gallery presents a novel approach to addressing the issue of social isolation among senior residents in nursing homes. Our field study findings demonstrate its potential to improve the quality of life of seniors by promoting social interaction and engagement.

14 pages, 2259 KiB  
Article
Evaluation of EEG Oscillatory Patterns and Classification of Compound Limb Tactile Imagery
by Kishor Lakshminarayanan, Rakshit Shah, Sohail R. Daulat, Viashen Moodley, Yifei Yao, Puja Sengupta, Vadivelan Ramu and Deepa Madathil
Brain Sci. 2023, 13(4), 656; https://doi.org/10.3390/brainsci13040656 - 13 Apr 2023
Cited by 13 | Viewed by 1557
Abstract
Objective: The purpose of this study was to investigate the cortical activity and digit classification performance during tactile imagery (TI) of a vibratory stimulus at the index, middle, and thumb digits within the left hand in healthy individuals. Furthermore, the cortical activities and classification performance of the compound TI were compared with similar compound motor imagery (MI) with the same digits as TI in the same subjects. Methods: Twelve healthy right-handed adults with no history of upper limb injury, musculoskeletal condition, or neurological disorder participated in the study. The study evaluated the event-related desynchronization (ERD) response and brain–computer interface (BCI) classification performance on discriminating between the digits in the left-hand during the imagery of vibrotactile stimuli to either the index, middle, or thumb finger pads for TI and while performing a motor activity with the same digits for MI. A supervised machine learning technique was applied to discriminate between the digits within the same given limb for both imagery conditions. Results: Both TI and MI exhibited similar patterns of ERD in the alpha and beta bands at the index, middle, and thumb digits within the left hand. While TI had significantly lower ERD for all three digits in both bands, the classification performance of TI-based BCI (77.74 ± 6.98%) was found to be similar to the MI-based BCI (78.36 ± 5.38%). Conclusions: The results of this study suggest that compound tactile imagery can be a viable alternative to MI for BCI classification. The study contributes to the growing body of evidence supporting the use of TI in BCI applications, and future research can build on this work to explore the potential of TI-based BCI for motor rehabilitation and the control of external devices.
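
For readers unfamiliar with the ERD measure used above, a small sketch of a per-channel alpha-band ERD% estimate follows. The sampling rate, band edges, and epoch layout are assumptions for illustration, not the study's exact processing.

```python
# Sketch of a per-channel ERD% estimate in the alpha band:
# ERD% = (baseline power - task power) / baseline power * 100.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz (assumed)

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def erd_percent(epoch, baseline_idx, task_idx, lo=8.0, hi=13.0):
    """epoch: (n_channels, n_samples); positive ERD% = power drop."""
    power = bandpass(epoch, lo, hi) ** 2
    p_base = power[:, baseline_idx].mean(axis=1)
    p_task = power[:, task_idx].mean(axis=1)
    return (p_base - p_task) / p_base * 100.0

epoch = np.random.randn(32, 3 * FS)        # stand-in for one 3 s trial
erd = erd_percent(epoch, np.arange(0, FS), np.arange(FS, 3 * FS))
print(erd.shape)                            # one ERD% value per channel
```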

16 pages, 10743 KiB  
Article
Contactless Interface Using Exhaled Breath and Thermal Imaging
by Kanghoon Lee and Jong-Il Park
Sensors 2023, 23(7), 3601; https://doi.org/10.3390/s23073601 - 30 Mar 2023
Viewed by 1020
Abstract
A new type of interface using a conduction hot spot reflecting the user’s intention is presented. Conventional methods using fingertips to generate conduction hot points cannot be applied to those who have difficulty using their hands or cold hands. In order to overcome this problem, an exhaling interaction using a hollow rod is proposed and extensively analyzed in this paper. A preliminary study on exhaling interaction demonstrated the possibility of the method. This paper is an attempt to develop and extend the concept and provide the necessary information for properly implementing the interaction method. We have repeatedly performed conduction hot-point-generation experiments on various materials that can replace walls or screens to make wide use of the proposed interfaces. Furthermore, a lot of experiments have been conducted in different seasons, considering that the surface temperature of objects also changes depending on the season. Based on the results of an extensive amount of experiments, we provide key observations on important factors such as material, season, and user condition, which should be considered for realizing contactless exhaling interfaces.
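
As a toy illustration of the core sensing step, the sketch below detects a conduction hot spot in a thermal frame by thresholding above the ambient surface temperature and taking a temperature-weighted centroid. The temperatures and offset are invented; real frames would come from a calibrated thermal camera.

```python
# Toy hot-spot detector for a thermal frame.
import numpy as np

def detect_hot_spot(frame_c, ambient_c, min_rise=1.5):
    """frame_c: 2-D per-pixel temperatures in deg C; returns (row, col)
    of the hot spot, or None if nothing exceeds ambient + min_rise."""
    mask = frame_c > (ambient_c + min_rise)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    weights = frame_c[rows, cols] - ambient_c   # warmer pixels weigh more
    return (int(np.average(rows, weights=weights)),
            int(np.average(cols, weights=weights)))

frame = np.full((120, 160), 22.0)               # 22 deg C wall
frame[60:64, 80:84] = 26.0                      # spot warmed by exhaled breath
print(detect_hot_spot(frame, ambient_c=22.0))   # -> (61, 81)
```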

19 pages, 9018 KiB  
Article
Wearable Drone Controller: Machine Learning-Based Hand Gesture Recognition and Vibrotactile Feedback
by Ji-Won Lee and Kee-Ho Yu
Sensors 2023, 23(5), 2666; https://doi.org/10.3390/s23052666 - 28 Feb 2023
Cited by 6 | Viewed by 5179
Abstract
We proposed a wearable drone controller with hand gesture recognition and vibrotactile feedback. The intended hand motions of the user are sensed by an inertial measurement unit (IMU) placed on the back of the hand, and the signals are analyzed and classified using machine learning models. The recognized hand gestures control the drone, and the obstacle information in the heading direction of the drone is fed back to the user by activating the vibration motor attached to the wrist. Simulation experiments for drone operation were performed, and the participants’ subjective evaluations regarding the controller’s convenience and effectiveness were investigated. Finally, experiments with a real drone were conducted and discussed to validate the proposed controller.
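
A sketch of the gesture-recognition stage follows: windowed IMU signals are reduced to simple statistical features and classified with a standard model. The feature set, window length, label set, and SVM choice are our assumptions for illustration; the paper compares several machine learning models.

```python
# Sketch: statistical features from IMU windows + an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def imu_features(window):
    """window: (n_samples, 6) accel xyz + gyro xyz."""
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 100, 6))    # stand-in 1 s recordings
X = np.stack([imu_features(w) for w in windows])
y = rng.integers(0, 4, size=200)                # e.g. hover/forward/left/right

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))                       # gesture commands to the drone
```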

12 pages, 1867 KiB  
Article
Efficiency of the Brain Network Is Associated with the Mental Workload with Developed Mental Schema
by Heng Gu, He Chen, Qunli Yao, Wenbo He, Shaodi Wang, Chao Yang, Jiaxi Li, Huapeng Liu, Xiaoli Li, Xiaochuan Zhao and Guanhao Liang
Brain Sci. 2023, 13(3), 373; https://doi.org/10.3390/brainsci13030373 - 21 Feb 2023
Cited by 1 | Viewed by 1273
Abstract
The study of mental workload has attracted much interest in neuroergonomics, a frontier field of research. However, there appears no consensus on how to measure mental workload effectively because the mental workload is not only regulated by task difficulty but also affected by individual skill level reflected as mental schema. In this study, we investigated the alterations in the functional brain network induced by a 10-day simulated piloting task with different difficulty levels. Topological features quantifying global and local information communication and network organization were analyzed. It was found that during different tests, the global efficiency did not change, but the gravity center of the local efficiency of the network moved from the frontal to the posterior area; the small-worldness of the functional brain network became stronger. These results demonstrate the reconfiguration of the brain network during the development of mental schema. Furthermore, for the first two tests, the global and local efficiency did not have a consistent change trend under different difficulty levels, but after forming the developed mental schema, both of them decreased with the increase in task difficulty, showing sensitivity to the increase in mental workload. Our results demonstrate brain network reconfiguration during the motor learning process and reveal the importance of the developed mental schema for the accurate assessment of mental workload. We concluded that the efficiency of the brain network was associated with mental workload with developed mental schema.
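
The global and local efficiency metrics discussed above can be computed with networkx on a thresholded connectivity matrix, as in this sketch; the random matrix and threshold below stand in for real EEG functional connectivity estimates.

```python
# Sketch: global/local efficiency of a binarized functional brain network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
conn = np.abs(rng.standard_normal((32, 32)))    # stand-in connectivity
conn = (conn + conn.T) / 2                      # symmetrize
np.fill_diagonal(conn, 0)

G = nx.from_numpy_array((conn > 1.0).astype(int))  # binarize at a threshold

print("global efficiency:", nx.global_efficiency(G))
print("local efficiency:", nx.local_efficiency(G))
# Small-worldness is typically judged by comparing such values against
# degree-matched random graphs.
```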

27 pages, 4230 KiB  
Article
Continuous Prediction of Web User Visual Attention on Short Span Windows Based on Gaze Data Analytics
by Francisco Diaz-Guerra and Angel Jimenez-Molina
Sensors 2023, 23(4), 2294; https://doi.org/10.3390/s23042294 - 18 Feb 2023
Viewed by 1397
Abstract
Understanding users’ visual attention on websites is paramount to enhance the browsing experience, such as providing emergent information or dynamically adapting Web interfaces. Existing approaches to accomplish these challenges are generally based on the computation of salience maps of static Web interfaces, while websites increasingly become more dynamic and interactive. This paper proposes a method and provides a proof-of-concept to predict user’s visual attention on specific regions of a website with dynamic components. This method predicts the regions of a user’s visual attention without requiring a constant recording of the current layout of the website, but rather by knowing the structure it presented in a past period. To address this challenge, the concept of visit intention is introduced in this paper, defined as the probability that a user, while browsing, will fixate their gaze on a specific region of the website in the next period. Our approach uses the gaze patterns of a population that browsed a specific website, captured via an eye-tracker device, to aid personalized prediction models built with individual visual kinetics features. We show experimentally that it is possible to conduct such a prediction through multilabel classification models using a small number of users, obtaining an average area under curve of 84.3%, and an average accuracy of 79%. Furthermore, the user’s visual kinetics features are consistently selected in every set of a cross-validation evaluation.
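
A sketch of "visit intention" framed as multilabel classification follows: for each screen region, predict whether gaze will fixate it in the next window. The features, labels, and model below are synthetic placeholders rather than the study's actual pipeline.

```python
# Sketch: multilabel prediction of per-region visit intention.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 12))              # visual-kinetics features
Y = (rng.random((500, 5)) < 0.3).astype(int)    # 5 regions: visit or not

clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
clf.fit(X[:400], Y[:400])

# One probability-of-visit column per region, then a macro-averaged AUC.
proba = np.column_stack([p[:, 1] for p in clf.predict_proba(X[400:])])
print("macro AUC:", roc_auc_score(Y[400:], proba, average="macro"))
```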

17 pages, 6094 KiB  
Article
Bilateral Teleoperation System with Integrated Position/Force Impedance Control for Assembly Tasks
by Shigang Peng, Meng Yu, Xiang Cheng and Pengfei Wang
Appl. Sci. 2023, 13(4), 2568; https://doi.org/10.3390/app13042568 - 16 Feb 2023
Cited by 2 | Viewed by 1672
Abstract
This article investigates the realization of safe and flexible assembly under manual teleoperation. A wearable positioning system for teleoperation assembly tasks was designed to provide great flexibility and operability. The 6D coordinate information of the hand was reconstructed with a wireless locator in real-time, and three control methods were conducted. In contrast to the traditional impedance methods, an integrated position/force control method which takes the operator’s posture as the desired position was proposed, thus achieving the combination of the initiative of the operator and the compliance of the impedance control. Additionally, the method possesses the capacity of eliminating collision force caused by hand jitters and misoperation. Finally, the system was evaluated in a representative application of teleoperated peg-in-hole insertion. Additionally, a challenging task was tested to illustrate advantages of the proposed method. The results show that the position trailing is precise enough for a teleoperation system, and the proposed integrated position/force control method outperformed the position control and impedance control approaches in terms of precision and operability.
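
For reference, here is a one-degree-of-freedom sketch of the general impedance law such controllers build on, M*x'' + B*x' + K*(x - x_d) = F_ext, with the operator's hand pose as the desired position x_d. The gains, time step, and on/off contact model are placeholders, not the paper's tuned values.

```python
# 1-DoF Euler integration of an impedance control law.
M, B, K = 1.0, 20.0, 200.0            # virtual mass, damping, stiffness
dt = 0.001                            # 1 kHz control loop

def impedance_step(x, v, x_des, f_ext):
    a = (f_ext - B * v - K * (x - x_des)) / M
    v += a * dt
    x += v * dt
    return x, v

x, v = 0.0, 0.0
for _ in range(2000):                 # 2 s of simulated insertion
    x_des = 0.05                      # operator commands a 5 cm offset
    f_ext = -30.0 if x > 0.04 else 0.0   # contact force near the hole edge
    x, v = impedance_step(x, v, x_des, f_ext)
print(round(x, 4))                    # settles near the 4 cm contact boundary
```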

16 pages, 3294 KiB  
Article
First Steps toward Voice User Interfaces for Web-Based Navigation of Geographic Information: A Spanish Terms Study
by Teresa Blanco, Sergio Martín-Segura, Juan López de Larrinzar, Rubén Béjar and Francisco Javier Zarazaga-Soria
Appl. Sci. 2023, 13(4), 2083; https://doi.org/10.3390/app13042083 - 06 Feb 2023
Cited by 1 | Viewed by 1391
Abstract
This work presents the first steps toward developing specific technology for voice user interfaces for geographic information systems. Despite having many general elements, such as voice recognition libraries, the current technology still lacks the ability to fully understand and process the semantics that real users apply to command geographic information systems. This paper presents the results of three connected experiments, following a mixed-methods approach. The first experiment focused on identifying the most common words used when working with maps in a web browser. The second experiment developed an understanding of the chain of commands used for map management for a specific objective. Finally, the third experiment involved the development of a prototype to validate this understanding. Using data and fieldwork, we created a minimum corpus of terms in Spanish. In addition, we identified the particularities of use and user profiles to consider in a voice user interface for geographic information systems, involving the user’s proprioception concerning the world and technology. These user profiles can be considered in future designs of human–technology interaction products. All the data collected and the source code of the prototype are provided as additional material, free to use and modify.
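
A toy sketch of the prototype's core idea follows: map recognized Spanish utterances to map actions via a small term dictionary. The vocabulary below is invented and far smaller than the corpus the authors collected.

```python
# Toy Spanish command-to-map-action lookup (hypothetical vocabulary).
ACTIONS = {
    "acercar": "zoom_in", "zoom": "zoom_in",
    "alejar": "zoom_out",
    "norte": "pan_north", "sur": "pan_south",
    "este": "pan_east", "oeste": "pan_west",
}

def parse_command(utterance: str):
    """Return the first map action found in a recognized utterance."""
    for token in utterance.lower().split():
        if token in ACTIONS:
            return ACTIONS[token]
    return None

print(parse_command("Acercar el mapa un poco"))       # -> zoom_in
print(parse_command("Mueve el mapa hacia el norte"))  # -> pan_north
```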

18 pages, 4406 KiB  
Article
Convolutional Neural Network with a Topographic Representation Module for EEG-Based Brain–Computer Interfaces
by Xinbin Liang, Yaru Liu, Yang Yu, Kaixuan Liu, Yadong Liu and Zongtan Zhou
Brain Sci. 2023, 13(2), 268; https://doi.org/10.3390/brainsci13020268 - 05 Feb 2023
Cited by 1 | Viewed by 2088
Abstract
Convolutional neural networks (CNNs) have shown great potential in the field of brain–computer interfaces (BCIs) due to their ability to directly process raw electroencephalogram (EEG) signals without artificial feature extraction. Some CNNs have achieved better classification accuracy than that of traditional methods. Raw EEG signals are usually represented as a two-dimensional (2-D) matrix composed of channels and time points, ignoring the spatial topological information of electrodes. Our goal is to make a CNN that takes raw EEG signals as inputs have the ability to learn spatial topological features and improve its classification performance while basically maintaining its original structure. We propose an EEG topographic representation module (TRM). This module consists of (1) a mapping block from raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output with the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRMs, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). Results show that the classification accuracies of all three CNNs are improved on both datasets after using the TRMs. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet and ShallowConvNet are improved by 6.54%, 1.72% and 2.07% on the EBDSDD and by 6.05%, 3.02% and 5.14% on the HGD, respectively; with TRM-(3,3), they are improved by 7.76%, 1.71% and 2.17% on the EBDSDD and by 7.61%, 5.06% and 6.28% on the HGD, respectively. We improve the classification performance of three CNNs on both datasets through the use of TRMs, indicating that they have the capability to mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as the input, CNNs with raw EEG signals as inputs can use this module without changing their original structures.
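
A hypothetical PyTorch sketch of the TRM idea follows: scatter channels onto a 2-D electrode grid, convolve, and return a tensor with the input's original channels-by-time size so an unmodified CNN can follow. The grid layout below is a toy montage, not the paper's electrode mapping.

```python
# Hypothetical TRM sketch: size-preserving topographic convolution.
import torch
import torch.nn as nn

class TRM(nn.Module):
    def __init__(self, grid_positions, kernel=3):   # kernel 3 ~ TRM-(3,3)
        super().__init__()
        self.pos = grid_positions                   # channel -> (row, col)
        rows = max(r for r, _ in grid_positions) + 1
        cols = max(c for _, c in grid_positions) + 1
        self.grid = (rows, cols)
        self.conv = nn.Conv2d(1, 1, kernel, padding=kernel // 2)

    def forward(self, x):                           # x: (batch, chan, time)
        b, c, t = x.shape
        topo = x.new_zeros(b, t, *self.grid)        # one map per time point
        for ch, (r, col) in enumerate(self.pos):
            topo[:, :, r, col] = x[:, ch, :]
        topo = self.conv(topo.reshape(b * t, 1, *self.grid))
        topo = topo.reshape(b, t, *self.grid)
        out = x.new_empty(b, c, t)                  # gather back to input shape
        for ch, (r, col) in enumerate(self.pos):
            out[:, ch, :] = topo[:, :, r, col]
        return out

trm = TRM([(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)])  # 5 channels, 3x3 grid
print(trm(torch.randn(2, 5, 100)).shape)             # torch.Size([2, 5, 100])
```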

20 pages, 7507 KiB  
Article
Biomechanical Modeling of Human–Robot Accident Scenarios: A Computational Assessment for Heavy-Payload-Capacity Robots
by Usman Asad, Shummaila Rasheed, Waqas Akbar Lughmani, Tayyaba Kazim, Azfar Khalid and Jürgen Pannek
Appl. Sci. 2023, 13(3), 1957; https://doi.org/10.3390/app13031957 - 02 Feb 2023
Cited by 1 | Viewed by 2626
Abstract
Exponentially growing technologies such as intelligent robots in the context of Industry 4.0 are radically changing traditional manufacturing to intelligent manufacturing with increased productivity and flexibility. Workspaces are being transformed into fully shared spaces for performing tasks during human–robot collaboration (HRC), increasing the possibility of accidents as compared to the fully restricted and partially shared workspaces. The next technological epoch of Industry 5.0 has a heavy focus on human well-being, with humans and robots operating in synergy. However, the reluctance to adopt heavy-payload-capacity robots due to safety concerns is a major hurdle. Therefore, the importance of analyzing the level of injury after impact can never be neglected for the safety of workers and for designing a collaborative environment. In this study, quasi-static and dynamic analyses of accidental scenarios during HRC are performed for medium- and low-payload-capacity robots according to the conditions given in ISO TS 15066 to assess the threshold level of injury and pain, and is subsequently extended for high speeds and heavy payloads for collaborative robots. For this purpose, accidental scenarios are simulated in ANSYS using a 3D finite element model of an adult human index finger and hand, composed of cortical bone and soft tissue. Stresses and strains in the bone and tissue, and contact forces and energy transfer during impact are studied, and contact speed limit values are estimated. It is observed that heavy-payload-capacity robots must be restricted to 80% of the speed limit of low-payload-capacity robots. Biomechanical modeling of accident scenarios offers insights and, therefore, gives confidence in the adoption of heavy-payload robots in factories of the future. The analysis allows for prediction and assessment of different hypothetical accidental scenarios in HRC involving high speeds and heavy-payload-capacity robots.

18 pages, 4805 KiB  
Article
Input Shape Effect on Classification Performance of Raw EEG Motor Imagery Signals with Convolutional Neural Networks for Use in Brain–Computer Interfaces
by Emre Arı and Ertuğrul Taçgın
Brain Sci. 2023, 13(2), 240; https://doi.org/10.3390/brainsci13020240 - 31 Jan 2023
Cited by 3 | Viewed by 1993
Abstract
EEG signals are interpreted, analyzed and classified by many researchers for use in brain–computer interfaces. Although there are many different EEG signal acquisition methods, one of the most interesting is motor imagery signals. Many different signal processing methods, machine learning and deep learning models have been developed for the classification of motor imagery signals. Among these, Convolutional Neural Network models generally achieve better results than other models. Because the size and shape of the data are important for training Convolutional Neural Network models and discovering the right relationships, researchers have designed and experimented with many different input shape structures. However, no study has been found in the literature evaluating the effect of different input shapes on model performance and accuracy. In this study, the effects of different input shapes on model performance and accuracy in the classification of EEG motor imagery signals were investigated, which had not been specifically studied before. In addition, signal preprocessing methods, which take a long time before classification, were not used; rather, two CNN models were developed for training and classification using raw data. Two different datasets, BCI Competition IV 2A and 2B, were used in classification processes. For different input shapes, 53.03–89.29% classification accuracy and 2–23 s epoch time were obtained for the 2A dataset, and 64.84–84.94% classification accuracy and 4–10 s epoch time were obtained for the 2B dataset. This study showed that the input shape has a significant effect on the classification performance, and when the correct input shape is selected and the correct CNN architecture is developed, feature extraction and classification can be done well by the CNN architecture without any signal preprocessing.

18 pages, 4536 KiB  
Article
Hey Max, Can You Help Me? An Intuitive Virtual Assistant for Industrial Robots
by Chen Li, Dimitrios Chrysostomou, Daniela Pinto, Andreas Kornmaaler Hansen, Simon Bøgh and Ole Madsen
Appl. Sci. 2023, 13(1), 205; https://doi.org/10.3390/app13010205 - 23 Dec 2022
Cited by 3 | Viewed by 1684
Abstract
Assisting employees in acquiring the knowledge and skills necessary to use new services and technologies on the shop floor is critical for manufacturers to adapt to Industry 4.0 successfully. In this paper, we employ a learning, training, assistance-formats, issues, tools (LTA-FIT) approach and propose a framework for a language-enabled virtual assistant (VA) to facilitate this adaptation. In our system, the human–robot interaction is achieved through spoken natural language and a dashboard implemented as a web-based application. This type of interaction enables operators of all levels to control a collaborative robot intuitively in several industrial scenarios and use it as a complementary tool for developing their competencies. Our proposed framework has been tested with 29 users who completed various tasks while interacting with the proposed VA and industrial robots. Through three different scenarios, we evaluated the usability of the system for LTA-FIT based on an established system usability scale (SUS) and the cognitive effort required by the users based on the standardised NASA-TLX questionnaire. The qualitative and quantitative results of the study show that users of all levels found the VA user friendly with low requirements for physical and mental effort during the interaction.
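
Since the evaluation relies on the standard System Usability Scale, here is a quick sketch of SUS scoring for a single respondent (10 items on a 1-5 Likert scale); the answers below are made up.

```python
# Standard SUS scoring for one respondent.
def sus_score(answers):
    """Odd items contribute (score - 1), even items (5 - score);
    the sum is scaled by 2.5 onto a 0-100 range."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # -> 85.0
```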

23 pages, 7671 KiB  
Article
Multimodal Warnings Design for In-Vehicle Robots under Driving Safety Scenarios
by Jianmin Wang, Chengji Wang, Yujia Liu, Tianyang Yue, Yuxi Wang and Fang You
Sensors 2023, 23(1), 156; https://doi.org/10.3390/s23010156 - 23 Dec 2022
Viewed by 2478
Abstract
In case of dangerous driving, the in-vehicle robot can provide multimodal warnings to help the driver correct the wrong operation, so the impact of the warning signal itself on driving safety needs to be reduced. This study investigates the design of multimodal warnings for in-vehicle robots under driving safety warning scenarios. Based on transparency theory, this study addressed the content and timing of visual and auditory modality warning outputs and discussed the effects of different robot speech and facial expressions on driving safety. Two rounds of experiments were conducted on a driving simulator to collect vehicle data, subjective data, and behavioral data. The results showed that driving safety and workload were optimal when the robot was designed to use negative expressions for the visual modality during the comprehension (SAT 2) phase and speech at a rate of 345 words/minute for the auditory modality during the comprehension (SAT 2) and prediction (SAT 3) phases. The design guideline obtained from the study provides a reference for the interaction design of driver assistance systems with robots as the interface.

23 pages, 2063 KiB  
Article
EEG-Based Emotion Recognition by Retargeted Semi-Supervised Regression with Robust Weights
by Ziyuan Chen, Shuzhe Duan and Yong Peng
Systems 2022, 10(6), 236; https://doi.org/10.3390/systems10060236 - 29 Nov 2022
Cited by 3 | Viewed by 1583
Abstract
The electroencephalogram (EEG) can objectively reflect the emotional state of human beings, and has attracted much attention in the academic circles in recent years. However, due to its weak, non-stationary, and low signal-to-noise properties, it is inclined to cause noise in the collected EEG data. In addition, EEG features extracted from different frequency bands and channels usually exhibit different levels of emotional expression abilities in emotion recognition tasks. In this paper, we fully consider the characteristics of EEG and propose a new model RSRRW (retargeted semi-supervised regression with robust weights). The advantages of the new model can be listed as follows. (1) The probability weight is added to each sample so that it could help effectively search noisy samples in the dataset, and lower the effect of them at the same time. (2) The distance between samples from different categories is much wider than before by extending the ϵ-dragging method to a semi-supervised paradigm. (3) Automatically discover the EEG emotional activation mode by adaptively measuring the contribution of sample features through feature weights. In the three cross-session emotion recognition tasks, the average accuracy of the RSRRW model is 81.51%, which can be seen in the experimental results on the SEED-IV dataset. In addition, with the support of the Friedman test and Nemenyi test, the classification of RSRRW model is much more accurate than that of other models.

20 pages, 4972 KiB  
Article
The Quantitative Research on Behavioral Intention towards 5G Rich Communication Services among University Students
by Zhiyuan Yu, Jianming Wu, Xiaoxiao Song, Wenzhao Fu and Chao Zhai
Systems 2022, 10(5), 136; https://doi.org/10.3390/systems10050136 - 01 Sep 2022
Viewed by 3174
Abstract
Supported by artificial intelligence and 5G techniques in mobile information systems, the rich communication services (RCS) are emerging as new media outlets and conversational agents for both institutional and individual users in China, which inherit the advantages of the short messaging service (SMS) with larger coverage and higher reach rate. The benefits can be fulfilled through media interactions between business and smart phone users. As a competitor of over-the-top services and social media apps, the adoption of RCS will play a vital role for mobile users. It is important to conduct quantitative research and reveal the behavioral intention to use (BIU) among RCS users. In this paper, we collect 195 valid responses from university students via an offline experiment and then build a structural equation model consisting of task characteristics (TAC), technology characteristics (TEC), task-technology fit (TTF), performance expectancy (PE), perceived risk (PR), perceived trust (PT), perceived convenience (PC) and satisfaction (SA). We find that SA, PC and PE have a direct impact on BIU. TTF has an indirect path connecting to BIU via PE and SA. The impacts of PT and PR on BIU are not significant. Performance results show that our proposed model could explain 49.2% and 63.1% of the variance for SA and BIU, respectively. Through revealing the influencing factors of BIU, we can point out the user perception of the brand-new interactive channel and then provide guidance for the large-scale commercialization of 5G RCS.

21 pages, 1855 KiB  
Article
A Robust and Low Computational Cost Pitch Estimation Method
by Desheng Wang, Yangjie Wei, Yi Wang and Jing Wang
Sensors 2022, 22(16), 6026; https://doi.org/10.3390/s22166026 - 12 Aug 2022
Cited by 2 | Viewed by 1995
Abstract
Pitch estimation is widely used in speech and audio signal processing. However, the current methods of modeling harmonic structure used for pitch estimation cannot always match the harmonic distribution of actual signals. Due to the structure of the vocal tract, the acoustic nature of musical equipment, and the spectrum leakage issue, speech and audio signals’ harmonic frequencies often slightly deviate from the integer multiple of the pitch. This paper starts with the summation of residual harmonics (SRH) method and makes two main modifications. First, the spectral peak position constraint of strict integer multiples is relaxed to allow slight deviation, which benefits capturing harmonics. Second, a main pitch segment extension scheme with low computational cost is proposed to utilize the smooth prior of pitch more efficiently. Besides, the pitch segment extension scheme is also integrated into the SRH method’s voiced/unvoiced decision to reduce short-term errors. Accuracy comparison experiments with ten pitch estimation methods show that the proposed method has better overall accuracy and robustness. Time cost experiments show that the time cost of the proposed method reduces to around 1/8 of that of the state-of-the-art fast NLS method on the experimental computer.
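
A sketch of a summation-of-residual-harmonics score with the paper's first modification follows: each harmonic peak may deviate slightly from an exact integer multiple of the candidate pitch. The constants and test signal are illustrative; inter-harmonic subtraction and the segment-extension scheme are omitted.

```python
# Tolerant harmonic-summation pitch score (SRH-like, simplified).
import numpy as np

def srh_like(spectrum, freqs, f0_grid, n_harm=5, tol=0.03):
    """Return the f0 candidate maximizing the tolerant harmonic sum."""
    scores = []
    for f0 in f0_grid:
        s = 0.0
        for k in range(1, n_harm + 1):
            band = spectrum[(freqs >= k * f0 * (1 - tol)) &
                            (freqs <= k * f0 * (1 + tol))]
            if band.size:
                s += band.max()      # strongest peak near the k-th harmonic
        scores.append(s)
    return f0_grid[int(np.argmax(scores))]

fs, n = 8000, 2048
t = np.arange(n) / fs
sig = sum(np.sin(2 * np.pi * 201.5 * k * t) / k for k in range(1, 5))
spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(srh_like(spec, freqs, np.arange(80.0, 400.0, 1.0)))  # near 201-202 Hz
```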

19 pages, 871 KiB  
Review
Research on Older Persons’ Access and Use of Technology in the Arab Region: Critical Overview and Future Directions
by Hajer Chalghoumi, Dena Al-Thani, Asma Hassan, Suzanne Hammad and Achraf Othman
Appl. Sci. 2022, 12(14), 7258; https://doi.org/10.3390/app12147258 - 19 Jul 2022
Cited by 4 | Viewed by 2372
Abstract
This paper presents the findings of a scoping review that maps exploratory evidence and gaps in research on information and communication technology (ICT) access and use among older persons in the Arab region. This review is part of a larger project that studies ICT access and use and related challenges faced by older adults in Qatar. A search was conducted in eleven scientific databases and search engines covering empirical studies published in English and Arabic between January 2016 and June 2021. Eleven studies were retrieved in the final corpus. A thematic analysis alongside the PRISMA for scoping reviews (PRISMA-ScR) was used to retrieve the findings. Our analysis identifies smartphones and social media applications for communication and information sharing as the most accessed and used technologies by older persons in the region. Moreover, our review highlighted the importance of the sociocultural factors in shaping ICT access and use by older persons in the region. The functional limitations of older persons in interaction with certain technology factors such as usability, functionality, and accessibility were also highlighted as major challenges inhibiting ICT access and use by this population segment. This scoping review provides a comprehensive overview of ICT access and use, and the factors affecting them among older persons in the Arab region. It highlights the scarcity of research on the subject in the region. It also stresses the fact that there is a need for more research on older persons and their caregivers in the context of the Arab world. More culturally appropriate need-based and adapted technologies are also recommended. Our review is a comprehensive source for researchers and technology developers interested in targeting and engaging older adults in the Arab region.

16 pages, 6257 KiB  
Article
Reconstructing Synergy-Based Hand Grasp Kinematics from Electroencephalographic Signals
by Dingyi Pei, Parthan Olikkal, Tülay Adali and Ramana Vinjamuri
Sensors 2022, 22(14), 5349; https://doi.org/10.3390/s22145349 - 18 Jul 2022
Cited by 6 | Viewed by 2529
Abstract
Brain-machine interfaces (BMIs) have become increasingly popular in restoring the lost motor function in individuals with disabilities. Several research studies suggest that the CNS may employ synergies or movement primitives to reduce the complexity of control rather than controlling each DoF independently, and the synergies can be used as an optimal control mechanism by the CNS in simplifying and achieving complex movements. Our group has previously demonstrated neural decoding of synergy-based hand movements and used synergies effectively in driving hand exoskeletons. In this study, ten healthy right-handed participants were asked to perform six types of hand grasps representative of the activities of daily living while their neural activities were recorded using electroencephalography (EEG). From half of the participants, hand kinematic synergies were derived, and a neural decoder was developed, based on the correlation between hand synergies and corresponding cortical activity, using multivariate linear regression. Using the synergies and the neural decoder derived from the first half of the participants and only cortical activities from the remaining half of the participants, their hand kinematics were reconstructed with an average accuracy above 70%. Potential applications of synergy-based BMIs for controlling assistive devices in individuals with upper limb motor deficits, implications of the results in individuals with stroke and the limitations of the study were discussed.
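
A compact sketch of this kind of synergy pipeline follows: PCA extracts kinematic synergies, a multivariate linear model maps cortical features to synergy activations, and joint angles are reconstructed through the synergy basis. The shapes and data below are placeholders for real EEG features and glove kinematics.

```python
# Sketch: PCA synergies + multivariate linear neural decoder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
kinematics = rng.standard_normal((1000, 18))   # 18 joint angles over time
eeg_feats = rng.standard_normal((1000, 64))    # cortical features per sample

pca = PCA(n_components=4)                      # a few synergies suffice
synergies = pca.fit_transform(kinematics)      # synergy activations

decoder = LinearRegression().fit(eeg_feats, synergies)  # multivariate regression

# Decode synergy activations from new cortical activity, then project back.
recon = pca.inverse_transform(decoder.predict(eeg_feats[:5]))
print(recon.shape)                             # (5, 18) reconstructed angles
```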

19 pages, 3372 KiB  
Article
Estimation of Knee Extension Force Using Mechanomyography Signals Based on GRA and ICS-SVR
by Zebin Li, Lifu Gao, Wei Lu, Daqing Wang, Huibin Cao and Gang Zhang
Sensors 2022, 22(12), 4651; https://doi.org/10.3390/s22124651 - 20 Jun 2022
Cited by 1 | Viewed by 1724
Abstract
During lower-extremity rehabilitation training, muscle activity status needs to be monitored in real time to adjust the assisted force appropriately, but it is a challenging task to obtain muscle force noninvasively. Mechanomyography (MMG) signals offer unparalleled advantages over sEMG, reflecting the intention of human movement while being noninvasive. Therefore, in this paper, based on MMG, a combined scheme of gray relational analysis (GRA) and support vector regression optimized by an improved cuckoo search algorithm (ICS-SVR) is proposed to estimate the knee joint extension force. Firstly, the features reflecting muscle activity comprehensively, such as time-domain features, frequency-domain features, time–frequency-domain features, and nonlinear dynamics features, were extracted from MMG signals, and the relational degree was calculated using the GRA method to obtain the correlation features with high relatedness to the knee joint extension force sequence. Then, a combination of correlated features with high relational degree was input into the designed ICS-SVR model for muscle force estimation. The experimental results show that the evaluation indices of the knee joint extension force estimation obtained by the combined scheme of GRA and ICS-SVR were superior to other regression models and could estimate the muscle force with higher estimation accuracy. It is further demonstrated that the proposed scheme can meet the need of muscle force estimation required for rehabilitation devices, powered prostheses, etc.
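
A sketch of grey relational analysis for ranking features against a force sequence follows, using the customary distinguishing coefficient rho = 0.5. The data is synthetic, and the ICS-SVR regression stage is omitted.

```python
# Grey relational degree of each feature w.r.t. a reference sequence.
import numpy as np

def grey_relational_degree(features, reference, rho=0.5):
    """features: (n_samples, n_features); reference: (n_samples,)."""
    def norm(x):
        return (x - x.min(0)) / (x.max(0) - x.min(0) + 1e-12)
    f = norm(features)
    r = norm(reference[:, None])[:, 0]
    diff = np.abs(f - r[:, None])
    coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    return coeff.mean(axis=0)                  # one degree per feature

rng = np.random.default_rng(4)
force = np.cumsum(rng.standard_normal(300))    # stand-in force sequence
feats = np.column_stack([force + rng.normal(0, s, 300) for s in (0.5, 2, 8)])
print(grey_relational_degree(feats, force))    # noisier features rank lower
```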

12 pages, 723 KiB  
Article
Time-Invariant Features-Based Online Learning for Long-Term Notification Management: A Longitudinal Study
by Jemin Lee, Sihyeong Park, Taeho Kim and Hyungshin Kim
Appl. Sci. 2022, 12(11), 5432; https://doi.org/10.3390/app12115432 - 27 May 2022
Viewed by 1670
Abstract
The increasing number of daily notifications generated by smartphones and wearable devices increases mental burdens, deteriorates productivity, and results in energy waste. These phenomena are exacerbated by emerging use cases in which users are wearing and using an increasing number of personal mobile devices, such as smartphones, smartwatches, AirPods, or tablets because all the devices can generate redundant notifications simultaneously. Therefore, in addition to distraction, redundant notifications triggered by multiple devices result in energy waste. Prior work proposed a notification management system called PASS, which automatically manipulates the occurrence of notifications based on personalized models. However, machine-learning-based models work poorly against new incoming notifications because prior work has not investigated behavior changes over time. To reduce the gap between modeling and real deployment when the model is to be used long-term, we conducted a longitudinal study with data collection over long-term periods. We collected an additional 11,258 notifications and analyzed 18,407 notifications, including the original dataset. The total study spans two years. Through a statistical test, we identified time-invariant features that can be fully used for training. To overcome the accuracy drop caused by newly occurring data, we design windowing time-invariant online learning (WTOL). In the newly collected dataset, WTOL improves the F-score of the original models based on batch learning from 44.3% to 69.0% by combining online learning and windowing features depending on time sensitivity.
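
A sketch in the spirit of combining windowing with online learning follows: the model is updated incrementally as notifications arrive and is periodically refit on a sliding window of recent data so that stale behavior is forgotten. The window size, features, and toy labels are our assumptions, not the paper's WTOL algorithm.

```python
# Incremental updates + sliding-window refits on a notification stream.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
classes = np.array([0, 1])                  # 1 = user attends the notification
model = SGDClassifier(loss="log_loss")
WINDOW = 200
buf_X, buf_y = [], []

for step in range(500):                     # simulated notification stream
    x = rng.standard_normal(6)              # time-invariant features only
    y = int(x[0] > 0)                       # toy ground-truth label
    model.partial_fit([x], [y], classes=classes)   # incremental update
    buf_X.append(x); buf_y.append(y)
    if len(buf_X) > WINDOW:                 # keep only recent observations
        buf_X.pop(0); buf_y.pop(0)
    if step % WINDOW == WINDOW - 1:         # refit on the recent window only
        model = SGDClassifier(loss="log_loss")
        model.partial_fit(np.array(buf_X), np.array(buf_y), classes=classes)

print(model.predict(rng.standard_normal((3, 6))))
```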

18 pages, 4087 KiB  
Article
Design of Proactive Interaction for In-Vehicle Robots Based on Transparency
by Jianmin Wang, Tianyang Yue, Yujia Liu, Yuxi Wang, Chengji Wang, Fei Yan and Fang You
Sensors 2022, 22(10), 3875; https://doi.org/10.3390/s22103875 - 20 May 2022
Cited by 2 | Viewed by 2016
Abstract
Based on the transparency theory, this study investigates the appropriate amount of transparency information expressed by the in-vehicle robot under two channels of voice and visual in a proactive interaction scenario. The experiments are to test and evaluate different transparency levels and combinations [...] Read more.
Based on transparency theory, this study investigates the appropriate amount of transparency information expressed by an in-vehicle robot through voice and visual channels in a proactive interaction scenario. Experiments on a driving simulator tested and evaluated different transparency levels and combinations of information across the robot's channels, collecting subjective and objective data on users' safety, usability, trust, and emotion under driving conditions. The results show that appropriate transparency expression can improve drivers' driving control and subjective evaluation, and that drivers need different amounts of transparency information in different types of tasks. Full article
(This article belongs to the Topic Human–Machine Interaction)

17 pages, 1386 KiB  
Article
Reliable Sarcoidosis Detection Using Chest X-rays with EfficientNets and Stain-Normalization Techniques
by Nadiah Baghdadi, Ahmed S. Maklad, Amer Malki and Mohanad A. Deif
Sensors 2022, 22(10), 3846; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103846 - 19 May 2022
Cited by 7 | Viewed by 2241
Abstract
Sarcoidosis is frequently misdiagnosed as tuberculosis (TB) and consequently mistreated due to inherent limitations in radiological presentations. Clinically, to distinguish sarcoidosis from TB, physicians usually employ biopsy tissue diagnosis and blood tests; this approach is painful for patients, time-consuming, expensive, and relies on techniques prone to human error. This study proposes a computer-aided diagnosis method to address these issues. The method fine-tunes and compares seven EfficientNet designs for their ability to categorize chest X-ray images into three classes: normal, TB-infected, and sarcoidosis-infected. Furthermore, the effect of stain normalization on performance was investigated using Reinhard's and Macenko's conventional stain normalization procedures. This procedure helps improve diagnostic efficiency and accuracy while cutting diagnostic costs. A database of 231 sarcoidosis-infected, 563 TB-infected, and 1010 normal chest X-ray images was created using public databases and information from several national hospitals. The EfficientNet-B4 model attained accuracy, sensitivity, and precision rates of 98.56%, 98.36%, and 98.67%, respectively, when the training X-ray images were normalized with the Reinhard stain approach, and 97.21%, 96.9%, and 97.11%, respectively, when normalized with Macenko's approach. The results demonstrate that Reinhard stain normalization can improve the performance of EfficientNet-B4 X-ray image classification. The proposed framework for identifying pulmonary sarcoidosis may prove valuable in clinical use. Full article
(This article belongs to the Topic Human–Machine Interaction)
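For readers unfamiliar with Reinhard normalization, the sketch below shows its core idea: matching per-channel colour statistics of a source image to a reference image. OpenCV's LAB space is used here as a convenient approximation of the lαβ space of Reinhard's original formulation, and the file paths are placeholders:

    import cv2
    import numpy as np

    def reinhard_normalize(src_bgr, ref_bgr):
        # Transfer per-channel mean and standard deviation in LAB space.
        src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        out = np.empty_like(src)
        for c in range(3):
            s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
            r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
            out[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
        out = np.clip(out, 0, 255).astype(np.uint8)
        return cv2.cvtColor(out, cv2.COLOR_LAB2BGR)

    # Usage (paths are placeholders):
    # normalized = reinhard_normalize(cv2.imread("xray.png"), cv2.imread("ref.png"))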

22 pages, 2510 KiB  
Article
Motor Imagery Classification via Kernel-Based Domain Adaptation on an SPD Manifold
by Qin Jiang, Yi Zhang and Kai Zheng
Brain Sci. 2022, 12(5), 659; https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci12050659 - 18 May 2022
Cited by 6 | Viewed by 2121
Abstract
Background: Recording the calibration data of a brain–computer interface is a laborious process and an unpleasant experience for the subjects. Domain adaptation is an effective technology for remedying the shortage of target data by leveraging rich labeled data from the sources. However, most prior methods need to extract the features of the EEG signal first, which triggers another challenge in BCI classification due to small sample sets or a lack of labels for the target. Methods: In this paper, we propose a novel domain adaptation framework, referred to as kernel-based Riemannian manifold domain adaptation (KMDA). KMDA circumvents the tedious feature extraction process by analyzing the covariance matrices of electroencephalogram (EEG) signals. Covariance matrices lie in the space of symmetric positive definite (SPD) matrices, which can be described by Riemannian metrics. In KMDA, the covariance matrices are aligned in the Riemannian manifold and then mapped to a high-dimensional space by a log-Euclidean-metric Gaussian kernel, where subspace learning is performed by minimizing the conditional distribution distance between the sources and the target while preserving the target's discriminative information. We also present an approach for converting EEG trials into 2D frames (E-frames) to further lower the dimension of the covariance descriptors. Results: Experiments on three EEG datasets demonstrated that KMDA outperforms several state-of-the-art domain adaptation methods in classification accuracy, with an average Kappa of 0.56 for BCI competition IV dataset IIa, 0.75 for BCI competition III dataset IIIa, and an average accuracy of 81.56% for BCI competition III dataset IVa. Additionally, the overall accuracy was further improved by 5.28% with the E-frames. KMDA shows potential for addressing subject dependence and shortening the calibration time of motor imagery-based brain–computer interfaces. Full article
(This article belongs to the Topic Human–Machine Interaction)
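A compact sketch of the covariance-based pipeline described above: per-trial spatial covariances, center alignment, and a Gaussian kernel on matrix logarithms (the log-Euclidean metric). Whitening by the Euclidean mean covariance is used as a simple stand-in for full Riemannian alignment, and all shapes and data are illustrative:

    import numpy as np
    from scipy.linalg import logm, fractional_matrix_power

    def covariances(trials):
        # Spatial covariance of each EEG trial (channels x samples).
        return np.array([t @ t.T / t.shape[1] for t in trials])

    def center_align(covs):
        # Whiten all trials by the inverse square root of their mean covariance.
        m_inv = fractional_matrix_power(covs.mean(axis=0), -0.5)
        return np.array([m_inv @ c @ m_inv for c in covs])

    def log_euclidean_kernel(covs_a, covs_b, sigma=1.0):
        # Gaussian kernel on matrix logarithms (log-Euclidean metric).
        la, lb = [logm(c) for c in covs_a], [logm(c) for c in covs_b]
        K = np.zeros((len(la), len(lb)))
        for i, A in enumerate(la):
            for j, B in enumerate(lb):
                K[i, j] = np.exp(-np.linalg.norm(A - B, "fro") ** 2
                                 / (2 * sigma ** 2))
        return K

    rng = np.random.default_rng(2)
    trials = rng.normal(size=(20, 8, 256))        # 20 trials, 8 channels, 256 samples
    covs = center_align(covariances(trials) + 1e-6 * np.eye(8))
    print(log_euclidean_kernel(covs, covs).shape)  # (20, 20)

Such a kernel matrix can then be fed to any kernel classifier; the paper's subspace learning and E-frame construction are further steps beyond this sketch.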

22 pages, 3360 KiB  
Article
Group Emotion Detection Based on Social Robot Perception
by Marco Quiroz, Raquel Patiño, José Diaz-Amado and Yudith Cardinale
Sensors 2022, 22(10), 3749; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103749 - 14 May 2022
Cited by 11 | Viewed by 2775
Abstract
Social robotics is an emerging area that is bringing autonomous social robots into social spaces. Social robots offer services, perform tasks, and interact with people in such environments, demanding more efficient and complex Human–Robot Interaction (HRI) designs. One strategy to improve HRI is to give robots the capacity to detect the emotions of the people around them, so they can plan trajectories, modify their behaviour, and interact appropriately based on the analysed information. However, in social environments in which groups of people are common, new approaches are needed to make robots able to recognise groups and the emotion of a group, which can also be associated with the scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, these works do not perform the recognition tasks from a robocentric perspective that considers the sensory capacity of robots. In this context, a system is presented that recognises scenes in terms of groups of people and then detects the global (prevailing) emotion in a scene. The proposed approach to visualising and recognising emotions in typical HRI is based on the face sizes of the people recognised by the robot during its navigation (face sizes decrease as the robot moves away from a group). On each frame of the visual sensor's video stream, individual emotions are recognised with the Visual Geometry Group (VGG) neural network pre-trained for face recognition (VGGFace); the emotion of the frame is then obtained by aggregating individual emotions with a fusion method, and the global (prevalent) emotion of the scene (group of people) is in turn obtained by aggregating the emotions of its constituent frames. Additionally, this work proposes a strategy to create datasets of images/videos for validating the estimation of scene and personal emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS) from videos captured by robots through their sensory capabilities. Tests are performed in two simulated environments in ROS/Gazebo: a museum and a cafeteria. Results show that the accuracy of individual emotion detection is 99.79%, and the accuracy of group (scene) emotion detection per frame is 90.84% in the cafeteria and 89.78% in the museum scenario. Full article
(This article belongs to the Topic Human–Machine Interaction)
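The face-size weighting described above can be illustrated in a few lines: per-frame emotions are fused by area-weighted voting, and the scene label is the majority over frames. Labels, areas, and the fusion rule are illustrative stand-ins for the paper's VGGFace-based pipeline:

    from collections import Counter

    def frame_emotion(face_emotions, face_areas):
        # Fuse individual emotions into one frame label, weighting each
        # vote by face area (larger faces are closer to the robot).
        weights = Counter()
        for emo, area in zip(face_emotions, face_areas):
            weights[emo] += area
        return weights.most_common(1)[0][0]

    def scene_emotion(frames):
        # Prevailing emotion over all frames of a scene (majority vote).
        labels = [frame_emotion(f["emotions"], f["areas"]) for f in frames]
        return Counter(labels).most_common(1)[0][0]

    frames = [
        {"emotions": ["happy", "neutral"], "areas": [900, 400]},
        {"emotions": ["happy", "sad"], "areas": [850, 300]},
    ]
    print(scene_emotion(frames))  # -> "happy"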

11 pages, 1933 KiB  
Article
Finite-Time Interactive Control of Robots with Multiple Interaction Modes
by Jiantao Yang and Tairen Sun
Sensors 2022, 22(10), 3668; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103668 - 11 May 2022
Cited by 3 | Viewed by 1312
Abstract
This paper proposes a finite-time multi-modal robotic control strategy for physical human–robot interaction. The proposed multi-modal controller consists of a modified super-twisting-based finite-time control term designed for each interaction mode and a continuity-guaranteed control term. The finite-time control term guarantees finite-time achievement of the desired impedance dynamics in active interaction mode (AIM), drives the tracking error of the reference trajectory to zero in finite time in passive interaction mode (PIM), and brings the robot to a stop in finite time in safety-stop mode (SSM). Meanwhile, the continuity-guaranteed control term ensures continuity of the control input and smooth transitions between interaction modes. The finite-time closed-loop stability and the control effectiveness are validated by Lyapunov-based theoretical analysis and by simulations on a robot manipulator. Full article
(This article belongs to the Topic Human–Machine Interaction)
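For intuition, here is a minimal sketch of the standard super-twisting algorithm on a toy double integrator with a bounded disturbance. The paper's modified control terms, mode switching, and continuity-guaranteed term are not reproduced; all gains and the plant are illustrative:

    import numpy as np

    def super_twisting(s, v, k1, k2, dt):
        # One step of the standard super-twisting law on sliding variable s.
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v = v - k2 * np.sign(s) * dt
        return u, v

    x, xd, v, dt = 1.0, 0.0, 0.0, 1e-3     # position, velocity, integral term, step
    for k in range(5000):
        s = x + xd                          # sliding surface e + e_dot (reference = 0)
        u, v = super_twisting(s, v, k1=1.5, k2=1.1, dt=dt)
        xd += (u + 0.1 * np.sin(2 * np.pi * k * dt)) * dt  # plant + disturbance
        x += xd * dt
    print(round(x, 3), round(xd, 3))        # both should be driven near zero

The discontinuous term acts only on the derivative of the control, which is what gives super-twisting its characteristic combination of finite-time convergence and a continuous control input.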

18 pages, 1024 KiB  
Article
Application of Noise Detection Using Confidence Learning in Lightweight Expression Recognition System
by Yu Zhao, Aiguo Song and Chaolong Qin
Appl. Sci. 2022, 12(10), 4808; https://0-doi-org.brum.beds.ac.uk/10.3390/app12104808 - 10 May 2022
Cited by 1 | Viewed by 1475
Abstract
Facial expression is an important carrier of psychological emotion, and lightweight expression recognition systems, being small-scale and highly portable, are the basis of emotional interaction technology for intelligent robots. With the rapid development of deep learning, fine-grained expression classification based on convolutional neural networks is strongly data-driven, so data quality has an important impact on model performance. To reduce the strong dependence of lightweight expression recognition systems on the training dataset and their weak generalization in real environments, an application method of confidence learning is proposed. The method modifies self-confidence and introduces two hyper-parameters to adjust the noise in facial expression datasets. A lightweight model structure combining a depthwise separable convolutional network and an attention mechanism is adopted for noise detection and expression recognition. The effectiveness of dynamic noise detection is verified on datasets with different noise ratios. Optimization and model training are carried out on four public expression datasets, and accuracy improves by 4.41% on average across multiple test sample sets. A lightweight expression recognition system is developed, and its accuracy improves significantly, verifying the effectiveness of the proposed method. Full article
(This article belongs to the Topic Human–Machine Interaction)
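The noise-detection idea can be sketched as follows: out-of-sample predicted probabilities give each sample a self-confidence score, and samples falling below a per-class threshold, adjusted by two illustrative hyper-parameters, are flagged as likely label noise. The synthetic data and the logistic-regression stand-in are assumptions, not the paper's lightweight network:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 10))
    y = (X[:, 0] > 0).astype(int)
    noisy = rng.choice(600, 60, replace=False)     # flip 10% of the labels
    y_noisy = y.copy()
    y_noisy[noisy] ^= 1

    # Out-of-sample probabilities: each sample is scored by a model
    # that never saw it during training.
    proba = cross_val_predict(LogisticRegression(), X, y_noisy,
                              cv=5, method="predict_proba")
    self_conf = proba[np.arange(len(y_noisy)), y_noisy]

    alpha, beta = 1.0, 0.0                         # illustrative tuning knobs
    thresh = np.array([alpha * self_conf[y_noisy == c].mean() + beta
                       for c in (0, 1)])
    flagged = np.where(self_conf < thresh[y_noisy])[0]

    recovered = len(set(flagged) & set(noisy)) / len(noisy)
    print(f"flagged {len(flagged)}; recovered {recovered:.0%} of injected noise")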

18 pages, 6879 KiB  
Article
Research on a User-Centered Evaluation Model for Audience Experience and Display Narrative of Digital Museums
by Lei Meng, Yuan Liu, Kaiwen Li and Ruimin Lyu
Electronics 2022, 11(9), 1445; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11091445 - 29 Apr 2022
Cited by 4 | Viewed by 2601
Abstract
As culture becomes a value dimension of economic and social development worldwide, museums as a social medium are given more missions and expectations. Mobile Internet technology is empowering digital museums in the epidemic context, bringing new public cultural service content to the public. In this paper, we focus on website quality and user experience in the current construction of digital museums. By analyzing the components of 20 digital museums, three models with different tendencies were abstracted. The three models were then implemented as prototype websites, and their user experience was evaluated experimentally. Results show that website content and user identity differences affect website quality, user attitudes, and user intentions. Rich contextual information contributes to the experience, and the “professional group” generally rates the digital museum experience lower than the “non-professional group”. This research has implications for the study of digital museum user groups, experience analysis, and content construction. Full article
(This article belongs to the Topic Human–Machine Interaction)

21 pages, 5083 KiB  
Article
IMU Motion Capture Method with Adaptive Tremor Attenuation in Teleoperation Robot System
by Huijin Zhu, Xiaoling Li, Long Wang, Zhangyi Chen, Yueyang Shi, Shuai Zheng and Min Li
Sensors 2022, 22(9), 3353; https://0-doi-org.brum.beds.ac.uk/10.3390/s22093353 - 27 Apr 2022
Cited by 9 | Viewed by 2634
Abstract
Teleoperation robot systems can help humans perform tasks in unstructured environments. However, non-intuitive control interfaces that use only a keyboard or joystick, together with physiological tremor, reduce teleoperation performance. This paper presents an intuitive control interface based on the wearable gForcePro+ armband. Two gForcePro+ armbands are worn at the centroids of the upper arm and forearm, respectively. Firstly, the kinematics model of the human arm is established, and the inertial measurement units (IMUs) are used to capture the position and orientation of the end of the arm. Then, a regression model of angular transformation is developed for the phenomenon that the rotation axis of the torsion joint is not perfectly aligned with the limb segment during motion; the model can be applied to different individuals. Finally, to attenuate physiological tremor, a variable-gain extended Kalman filter (EKF) fusing sEMG signals is developed. The described control interface shows good attitude estimation accuracy compared with the VICON optical capture system, with an average angular RMSE of 4.837° ± 1.433°. The performance of the described filtering method is tested on the xMate3 Pro robot, and the results show that it improves the robot's tracking performance and reduces tremor. Full article
(This article belongs to the Topic Human–Machine Interaction)
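The tremor-attenuation idea (trust the motion measurement less while tremor activity is high) can be sketched with a scalar filter whose measurement noise grows with a tremor-power signal. The real system fuses sEMG into a full EKF over arm kinematics, so the model, gains, and signals below are an illustrative reduction:

    import numpy as np

    def variable_gain_kf(z, tremor_power, q=1e-4, r0=1e-2, gamma=50.0):
        # Scalar Kalman filter whose measurement noise grows with tremor
        # power, so measurements are trusted less while tremor is strong.
        x, p = z[0], 1.0
        out = np.empty_like(z)
        for k in range(len(z)):
            p = p + q                                  # predict (random-walk model)
            r = r0 * (1.0 + gamma * tremor_power[k])   # variable measurement noise
            kgain = p / (p + r)
            x = x + kgain * (z[k] - x)                 # update
            p = (1 - kgain) * p
            out[k] = x
        return out

    t = np.linspace(0, 5, 2000)
    angle = 30 * np.sin(0.5 * t)                       # slow voluntary motion (deg)
    tremor = 2.0 * np.sin(2 * np.pi * 9 * t)           # ~9 Hz physiological tremor
    z = angle + tremor
    power = tremor ** 2                                # stand-in for an sEMG estimate
    est = variable_gain_kf(z, power)
    print("RMS error raw:", np.sqrt(np.mean((z - angle) ** 2)),
          "filtered:", np.sqrt(np.mean((est - angle) ** 2)))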

26 pages, 1138 KiB  
Review
Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI
by Chiara Innocente, Luca Ulrich, Sandro Moos and Enrico Vezzetti
Appl. Sci. 2022, 12(9), 4295; https://0-doi-org.brum.beds.ac.uk/10.3390/app12094295 - 24 Apr 2022
Cited by 13 | Viewed by 3309
Abstract
Background: Augmented Reality (AR) is an innovative technology for improving data visualization and strengthening human perception. Among Human–Machine Interaction (HMI) applications, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications in order to support further research and development. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and the surgical procedure involved. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review also identifies open problems, related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols, that further research should address. Full article
(This article belongs to the Topic Human–Machine Interaction)

19 pages, 2589 KiB  
Article
Automatic Speech Recognition Performance Improvement for Mandarin Based on Optimizing Gain Control Strategy
by Desheng Wang, Yangjie Wei, Ke Zhang, Dong Ji and Yi Wang
Sensors 2022, 22(8), 3027; https://0-doi-org.brum.beds.ac.uk/10.3390/s22083027 - 15 Apr 2022
Cited by 3 | Viewed by 2474
Abstract
Automatic speech recognition (ASR) is an essential technique for human–computer interaction, and gain control is a commonly used operation in ASR. However, inappropriate gain control strategies can increase the word error rate (WER) of ASR. Because the relationship between gain control and WER currently lacks sufficient theoretical analysis and proof, various unconstrained gain control strategies have been adopted in realistic ASR systems, and the optimal gain control with respect to the lowest WER is rarely achieved. A gain control strategy named maximized original signal transmission (MOST) is proposed in this study to minimize the adverse impact of gain control on ASR systems. First, by modeling the gain control strategy, the quantitative relationship between gain control and ASR performance was established using the noise figure index. Second, through analysis of this quantitative relationship, an optimal MOST gain control strategy with minimal performance degradation was theoretically deduced. Finally, comprehensive comparative experiments on a Mandarin dataset show that the proposed MOST gain control strategy can significantly reduce the WER of the experimental ASR system, with a 10% mean absolute WER reduction at −9 dB gain. Full article
(This article belongs to the Topic Human–Machine Interaction)
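The mechanism the paper models can be illustrated numerically: applying a digital gain and re-quantizing to 16 bits raises the noise floor relative to the signal, so low gains degrade the audio an ASR front-end receives. This toy measurement only illustrates the noise-figure argument; it is not the authors' derivation of MOST:

    import numpy as np

    def snr_after_gain(signal, gain_db):
        # SNR after applying a digital gain and re-quantizing to 16 bits.
        g = 10 ** (gain_db / 20)
        scaled = np.clip(signal * g, -1.0, 1.0 - 2 ** -15)
        quantized = np.round(scaled * 2 ** 15) / 2 ** 15
        recovered = quantized / g                 # undo the gain downstream
        noise = recovered - signal
        return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

    rng = np.random.default_rng(4)
    speech = 0.3 * rng.normal(size=16000)         # placeholder for a speech frame
    for gain_db in (0, -9, -18):
        print(gain_db, "dB gain ->", round(snr_after_gain(speech, gain_db), 1),
              "dB SNR")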

18 pages, 13125 KiB  
Article
Remote Sensing System for Motor Nerve Impulse
by Carmen Aura Moldovan, Marian Ion, David Catalin Dragomir, Silviu Dinulescu, Carmen Mihailescu, Eduard Franti, Monica Dascalu, Lidia Dobrescu, Dragos Dobrescu, Mirela-Iuliana Gheorghe, Lars-Cyril Blystad, Per Alfred Ohlckers, Luca Marchetti, Kristin Imenes, Birgitte Kasin Hønsvall, Jairo Ramirez-Sarabia, Ioan Lascar, Tiberiu Paul Neagu, Stefania Raita, Ruxandra Costea, Adrian Barbilian, Florentina Gherghiceanu, Cristian Stoica, Catalin Niculae, Gabriel Predoi, Vlad Carbunaru, Octavian Ionescu and Ana Maria Oproiu
Sensors 2022, 22(8), 2823; https://0-doi-org.brum.beds.ac.uk/10.3390/s22082823 - 07 Apr 2022
Cited by 1 | Viewed by 4193
Abstract
In this article, we present our research on the development of a remote sensing system for motor nerve impulse acquisition, as a first step towards a complete neuroprosthetic arm. We present the fabrication process of an implantable electrode for nerve impulse acquisition, together with an innovative wirelessly controlled system; in our study, these were combined into an implantable device for attachment to peripheral nerves. Mechanical and biocompatibility tests were performed, as well as in vivo testing on pigs using the developed system. The testing and experimental results are presented comprehensively, demonstrating that the system meets the requirements of its intended application. Most significantly, neural electrical signals were acquired and transmitted out of the body during animal experiments, which were conducted according to ethical regulations in the field. Full article
(This article belongs to the Topic Human–Machine Interaction)

15 pages, 5580 KiB  
Article
Low-Computational-Cost Algorithm for Inclination Correction of Independent Handwritten Digits on Microcontrollers
by H. Waruna H. Premachandra, Maika Yamada, Chinthaka Premachandra and Hiroharu Kawanaka
Electronics 2022, 11(7), 1073; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11071073 - 29 Mar 2022
Cited by 1 | Viewed by 1747
Abstract
In recent years, the digitization of documents has progressed, and opportunities for handwritten document creation have decreased. However, handwritten notes are still taken to record data, and automated digitization is needed in some cases, such as producing Excel sheets. Digitizing handwritten notes currently requires manual input, so the automatic recognition and entry of characters using a character recognition system is useful. However, if the characters are inclined, the recognition rate is low; we therefore focus on the character inclination correction problem. Conventional methods correct inclination by estimating the inclination of a character line, so they do not work when characters are written at independent positions. Therefore, in this study, we propose a new method for estimating and correcting the tilt of independent handwritten digits by analyzing the circumscribed rectangle and other digital features. The proposed method is not based on an AI learning model or a complicated mathematical model; it follows a comparatively simple mathematical calculation that can be implemented on a microcontroller. Experiments with digits written at independent positions show that the proposed method corrects inclination with high accuracy. Furthermore, the proposed algorithm has a low computational cost and can run in real time on a microcontroller. Full article
(This article belongs to the Topic Human–Machine Interaction)
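As a flavor of low-cost inclination correction, the classic moments-based deskew below straightens a single digit with one affine warp. It is a stand-in for the paper's circumscribed-rectangle analysis, and the 28-by-28, white-on-black input format is an assumption:

    import cv2
    import numpy as np

    def deskew_digit(img, size=28):
        # Deskew one handwritten digit using its second-order image moments.
        m = cv2.moments(img)
        if abs(m["mu02"]) < 1e-2:
            return img.copy()                 # almost no vertical extent: skip
        skew = m["mu11"] / m["mu02"]          # tilt of the stroke distribution
        M = np.float32([[1, skew, -0.5 * size * skew], [0, 1, 0]])
        return cv2.warpAffine(img, M, (size, size),
                              flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)

    # Usage with a white-on-black 28x28 digit image (path is a placeholder):
    # upright = deskew_digit(cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE))

A single moment ratio and one warp per digit keeps the arithmetic well within microcontroller budgets, which is the design constraint the paper emphasizes.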

23 pages, 3227 KiB  
Article
Assessing Distinct Cognitive Workload Levels Associated with Unambiguous and Ambiguous Pronoun Resolutions in Human–Machine Interactions
by Mengyuan Zhao, Zhangyifan Ji, Jing Zhang, Yiwen Zhu, Chunhua Ye, Guangying Wang and Zhong Yin
Brain Sci. 2022, 12(3), 369; https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci12030369 - 11 Mar 2022
Cited by 2 | Viewed by 1861
Abstract
Pronoun resolution plays an important role in language comprehension. However, little is known about the cognitive mechanisms it recruits. Our investigation aims to explore the cognitive mechanisms underlying various types of pronoun resolution in Chinese using electroencephalography (EEG). We used three convolutional neural networks (CNNs), LeNet-5, GoogLeNet, and EfficientNet, to discover high-level feature abstractions of the EEG spatial topologies. The outputs of the three models were then fused at different scales by principal component analysis (PCA) to achieve cognitive workload classification. Overall, a workload classification rate of 55–63% can be achieved by fusing the three deep networks in a participant-specific manner. We provide evidence that both the behavioral indicator of reaction time and the neural indicator of cognitive workload collected during pronoun resolution vary with the type of pronoun. We observed an increase in reaction time accompanied by a decrease in theta power while participants were processing ambiguous pronoun resolution compared with unambiguous controls. We propose that ambiguous pronoun resolution involves a more time-consuming yet more flexible cognitive mechanism, consistent with the predictions of the decision-making framework from an influential pragmatic tradition. Our results extend previous research by showing that the cognitive states of resolving ambiguous and unambiguous pronouns are differentiated, indicating that cognitive workload evaluated by machine-learning analysis of EEG signals acts as a complementary indicator for studying pronoun resolution and serves as an important inspiration for human–machine interaction. Full article
(This article belongs to the Topic Human–Machine Interaction)
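A minimal sketch of the fusion step: feature vectors from three networks are concatenated, reduced with PCA, and classified. Random arrays stand in for the CNN feature abstractions, so the printed score should sit near chance; the dimensions and the SVM back-end are assumptions:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 300
    # Placeholders for high-level features extracted by three pretrained CNNs.
    feats = [rng.normal(size=(n, d)) for d in (120, 256, 64)]
    y = rng.integers(0, 2, n)                     # low vs. high workload labels

    fused = PCA(n_components=20).fit_transform(np.hstack(feats))
    print(cross_val_score(SVC(), fused, y, cv=5).mean())  # ~chance on random data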

14 pages, 4378 KiB  
Article
Constructing Data-Driven Personas through an Analysis of Mobile Application Store Data
by Daehee Park and Jeannie Kang
Appl. Sci. 2022, 12(6), 2869; https://0-doi-org.brum.beds.ac.uk/10.3390/app12062869 - 10 Mar 2022
Cited by 5 | Viewed by 3445
Abstract
As smartphone segments have become more complex in recent times, the importance of personas for design and marketing has increased. Designers have traditionally relied on qualitative personas, which have been criticised for their lack of evidence and quickly outdated results. Although several methods of quantitative persona creation have been developed over the last few years, the use of mobile application store data has not yet been studied. In this research, we propose a framework using work domain analysis to help designers and marketers build personas easily from mobile application store data. We considered the top 100 applications, ranked by the number of devices using each application, how often each application was used, and the usage time. After proposing the new framework, we analysed data from a mobile application store in January and August 2020. We then created quantitative personas based on the data and discussed with experts whether the created personas successfully reflected real changes in mobile application trends. Full article
(This article belongs to the Topic Human–Machine Interaction)
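One simple way to realize data-driven personas from store data, assuming per-user usage hours by app category are available, is to cluster users and read each centroid as a persona. The category list, synthetic data, and cluster count below are placeholders, not the paper's work-domain-analysis framework:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)
    apps = ["video", "social", "games", "finance", "navigation"]
    # Rows: users; columns: monthly usage hours per category (placeholder data).
    usage = np.abs(rng.normal(size=(500, len(apps))) * [20, 15, 10, 2, 4])

    X = StandardScaler().fit_transform(usage)
    km = KMeans(n_clusters=4, n_init=10, random_state=6).fit(X)

    # Each centroid (in standardized units) sketches one persona.
    for i, centroid in enumerate(km.cluster_centers_):
        top = apps[int(np.argmax(centroid))]
        print(f"persona {i}: {np.sum(km.labels_ == i)} users, "
              f"dominant category: {top}")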

15 pages, 7307 KiB  
Article
A Grip Strength Estimation Method Using a Novel Flexible Sensor under Different Wrist Angles
by Yina Wang, Liwei Zheng, Junyou Yang and Shuoyu Wang
Sensors 2022, 22(5), 2002; https://0-doi-org.brum.beds.ac.uk/10.3390/s22052002 - 04 Mar 2022
Cited by 1 | Viewed by 2440
Abstract
Accurate, continuous detection of handgrip strength is a considerable challenge due to its complexity and uncertainty. To address this issue, a novel multi-wrist-angle grip strength estimation method based on a newly developed flexible deformation sensor is proposed. The flexible deformation sensor consists of a foaming sponge, a Hall sensor, an LED, and photoresistors (PRs), and it measures the muscle deformation that accompanies grip strength. When external deformation squeezes the foaming sponge, its density, and hence the light intensity inside it, changes; this change is detected by a photoresistor embedded in the sponge, enabling relative measurement of muscle deformation. Furthermore, to achieve fast, accurate, and continuous detection of grip strength at different wrist angles, a new grip strength-arm muscle model is adopted, and a one-dimensional convolutional neural network based on a dynamic window is proposed to recognize wrist joint angles. Finally, the experimental results demonstrate that the proposed flexible deformation sensor accurately detects muscle deformation of the arm, and that the designed muscle model and convolutional neural network can continuously predict handgrip at different wrist angles in real time. Full article
(This article belongs to the Topic Human–Machine Interaction)
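For concreteness, a small 1D CNN over a fixed window of sensor samples is sketched below in PyTorch. The paper's dynamic-window variant and exact architecture are not given in this listing, so the channel counts, window length, and number of wrist-angle classes are illustrative:

    import torch
    import torch.nn as nn

    class WristAngleCNN(nn.Module):
        # Small 1D CNN over a window of sensor samples (sizes illustrative).
        def __init__(self, channels=4, n_angles=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(channels, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, n_angles),
            )

        def forward(self, x):          # x: (batch, channels, window)
            return self.net(x)

    model = WristAngleCNN()
    logits = model(torch.randn(8, 4, 128))
    print(logits.shape)                # torch.Size([8, 5])

The adaptive pooling layer makes the network indifferent to the window length, which is one straightforward way to accommodate windows of varying size.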

11 pages, 2185 KiB  
Brief Report
Entrapment of Binaural Auditory Beats in Subjects with Symptoms of Insomnia
by Eunyoung Lee, Youngrong Bang, In-Young Yoon and Ha-Yun Choi
Brain Sci. 2022, 12(3), 339; https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci12030339 - 02 Mar 2022
Cited by 4 | Viewed by 4274
Abstract
Binaural beat (BB) stimulation, which presents a different frequency to each ear, is reportedly effective in reducing anxiety and controlling mood. This study aimed to evaluate the brainwave entrainment effect of binaural beats and to propose an effective and safe supplementary therapy for relieving the symptoms of insomnia. Subjects between 20 and 59 years of age with subclinical symptoms of insomnia were recruited from the community. Quantitative electroencephalography was measured twice, before and two weeks after the BB intervention. Participants used the apparatus, with or without a 6 Hz BB, for 30 min before going to bed for two weeks. When music with BB was played, relative theta power increased (occipital, p = 0.009). After two weeks of intervention, theta power increased when listening to music with BB (parietal, p = 0.009). After listening to music with BB for two weeks, the decrease in beta power while listening to music in the laboratory was more noticeable than after using the music-only device (occipital, p = 0.035). When BB were played, entrainment of the theta wave appeared. Therefore, exposure to music with BB is likely to reduce the hyper-arousal state and contribute to sleep induction. Full article
(This article belongs to the Topic Human–Machine Interaction)
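The quantity compared in such EEG studies, relative band power, can be computed in a few lines. The sketch below evaluates the relative theta (4-8 Hz) power of a synthetic signal with Welch's method; the sampling rate and data are placeholders:

    import numpy as np
    from scipy.signal import welch

    def relative_band_power(eeg, fs, band=(4.0, 8.0)):
        # Fraction of total spectral power inside the band (theta by default).
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum() / psd.sum()

    fs = 250
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(7)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)  # 6 Hz + noise
    print(f"relative theta power: {relative_band_power(eeg, fs):.2f}")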

17 pages, 7136 KiB  
Article
The Indicator of GDI Engine Operating Mode and Its Influence on Eco-Driving
by Grzegorz Marek Pawlak and Zbigniew Wołczyński
Appl. Sci. 2022, 12(5), 2325; https://0-doi-org.brum.beds.ac.uk/10.3390/app12052325 - 23 Feb 2022
Viewed by 2184
Abstract
Elements of car design, especially the information available on the dashboard, can shape driving style. The experiment described in this paper examined how information from an indicator showing the operating mode of a gasoline direct injection (GDI) engine can contribute to eco-driving and help the driver learn accelerator pedal operation. Analysis of the fuel injection process as affected by driver behaviour was an essential part of the experiment. The experiment was divided into two parts. In the first (nine tests), drivers had no access to the indicator information; in the second, information on the engine's operating mode was available to the driver. The results confirmed that information about the type of fuel mixture supplying the GDI engine facilitates an economical driving style (about 10% fuel savings) and motivates the driver to engage in eco-driving. Full article
(This article belongs to the Topic Human–Machine Interaction)

19 pages, 8371 KiB  
Article
Motion Simulation and Human–Computer Interaction System for Lunar Exploration
by Yuzhen Xie, Zihan Tang and Aiguo Song
Appl. Sci. 2022, 12(5), 2312; https://0-doi-org.brum.beds.ac.uk/10.3390/app12052312 - 23 Feb 2022
Cited by 2 | Viewed by 1887
Abstract
When planning lunar rover missions, it is important to develop intuition and driving skills for unfamiliar environments before incurring the costs of reaching the moon. Simulators make it possible to operate in environments that have the physical characteristics of target locations without the expense of extensive physical tests. This paper proposes a motion simulation and human–computer interaction system based on a parallel mechanism to realize high-fidelity manned lunar rover simulations. The system consists of an interactive operating platform and a lunar surface simulation environment based on Unity3D. To make the 6-DOF platform reproduce the posture changes of the rover, we improved the motion simulation algorithm. We designed a posture adjustment system and built virtual sensors to help astronauts perceive the lunar environment. Finally, this paper discusses the realization of the multi-channel human–computer interaction system, through which astronauts can interactively control the rover via five channels. Experiments show that the system achieves high-fidelity rover simulation and improves the efficiency of human–computer interaction. Full article
(This article belongs to the Topic Human–Machine Interaction)

23 pages, 916 KiB  
Article
fMRI Brain Decoding and Its Applications in Brain–Computer Interface: A Survey
by Bing Du, Xiaomu Cheng, Yiping Duan and Huansheng Ning
Brain Sci. 2022, 12(2), 228; https://0-doi-org.brum.beds.ac.uk/10.3390/brainsci12020228 - 07 Feb 2022
Cited by 10 | Viewed by 6014
Abstract
Brain neural activity decoding is an important branch of neuroscience research and a key technology for the brain–computer interface (BCI). Researchers initially developed simple linear models and machine learning algorithms to classify and recognize brain activities. With the great success of deep learning in image recognition and generation, deep neural networks (DNNs) have been employed to reconstruct visual stimuli from human brain activity recorded via functional magnetic resonance imaging (fMRI). In this paper, we review brain activity decoding models based on machine learning and deep learning algorithms. Specifically, we focus on the decoding models currently receiving the most attention: the variational auto-encoder (VAE), the generative adversarial network (GAN), and the graph convolutional network (GCN). Furthermore, fMRI-based BCI applications of brain neural activity decoding in the treatment of mental and psychological diseases are presented to illustrate the positive correlation between brain decoding and BCI. Finally, existing challenges and future research directions are addressed. Full article
(This article belongs to the Topic Human–Machine Interaction)

15 pages, 3401 KiB  
Article
Measuring Cognition Load Using Eye-Tracking Parameters Based on Algorithm Description Tools
by Jozsef Katona
Sensors 2022, 22(3), 912; https://0-doi-org.brum.beds.ac.uk/10.3390/s22030912 - 25 Jan 2022
Cited by 27 | Viewed by 4070
Abstract
Writing a computer program is a complex cognitive task, especially for a newcomer to the field. In this research, an eye-tracking system was developed and applied to observe eye movement parameters during programming as a complex cognitive process, allowing conclusions to be drawn from the results. The aim of the paper is to examine, by recording and evaluating eye movement parameters, whether the flowchart or the Nassi–Shneiderman diagram is the more efficient algorithm description tool in terms of cognitive load. The results show that interpreting the flowchart produced significantly longer fixation durations, more fixations, and larger pupil diameters than interpreting the Nassi–Shneiderman diagram. Based on these results, it is clear how important choosing the right programming tools is for efficient, lower-cost application development. Full article
(This article belongs to the Topic Human–Machine Interaction)
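A typical analysis for such within-subject comparisons is a paired test on per-participant eye-movement metrics; the sketch below applies one to synthetic fixation durations. The numbers are invented for illustration, and the choice of test is an assumption rather than necessarily the author's exact statistics:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    # Placeholder per-participant mean fixation durations (ms) per notation.
    fix_dur_flowchart = rng.normal(310, 40, 20)
    fix_dur_nassi = rng.normal(260, 40, 20)

    # Paired comparison: the same participants saw both diagram types.
    t_stat, p_val = stats.ttest_rel(fix_dur_flowchart, fix_dur_nassi)
    print(f"t = {t_stat:.2f}, p = {p_val:.4f}")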
