New Insights into Human-Computer Interaction

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 7271

Special Issue Editors


Prof. Dr. Dorian Gorgan
Guest Editor
Computer Science Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: high-performance graphical processing and visualization; interactive application development; parallel and distributed processing; user interface development methodology; computer games; visual analytics; machine learning

Dr. Teodor Ștefănuț
Guest Editor
Computer Science Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: web technologies; interactive application development; spatial data processing; augmented and virtual reality; user interface development methodology; machine learning; interactive system security; user interface evaluation

Special Issue Information

Dear Colleagues,

The computer is an advanced tool that models and simulates the complex real world. Matching reality with this virtual model requires humans to interact naturally with both. Processing resources are ubiquitous, and user interaction can take place through a variety of channels: language, graphics, voice, gestures, and sensors.

Inserting human interaction into the processing loop combines high-performance computation with the analytical capacity of the human brain. Moreover, interaction with the virtual model offers new and interesting capabilities, such as time extension and compression, multidimensional space exploration, varied and repeated simulations, visual analysis, and assisted strategies that converge towards the optimal solution.

The current challenges in this field are diverse and are concerned with defining and experimenting with new concepts, development methodologies, architectures, devices, techniques and styles of interaction, and intelligent and adaptive interfaces.

Therefore, this Special Issue aims to present new proposals and experimental results in the field of human-computer interaction and to highlight the broad spectrum of development phases, from concepts and theory, analysis, architectural solutions, design, and implementation to evaluation and experimental validation.

Prof. Dr. Dorian Gorgan
Dr. Teodor Ștefănuț
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • interaction paradigms
  • emerging technologies for user interaction
  • user interaction techniques and styles
  • user interaction development methodologies
  • adaptive and intelligent user interaction
  • human in the loop
  • multi-dimensional space navigation
  • visual analytics
  • multimodal interaction
  • computer game development
  • scientific visualization
  • collaborative work and learning
  • user interface evaluation and validation

Published Papers (6 papers)


Research

11 pages, 1668 KiB  
Article
Collaborative System Usability in Spaceflight Analog Environments through Remote Observations
by Shivang Shelat, Jessica J. Marquez, Jimin Zheng and John A. Karasinski
Appl. Sci. 2024, 14(5), 2005; https://doi.org/10.3390/app14052005 - 28 Feb 2024
Viewed by 451
Abstract
The conventional design cycle in human–computer interaction faces significant challenges when applied to users in isolated settings, such as astronauts in extreme environments. Challenges include obtaining user feedback and effectively tracking human–software/human–human dynamics during system interactions. This study addresses these issues by exploring the potential of remote conversation analysis to validate the usability of collaborative technology, supplemented with a traditional post hoc survey approach. Specifically, we evaluate an integrated timeline software tool used in NASA’s Human Exploration Research Analog. Our findings indicate that voice recordings, which focus on the topical content of intra-crew speech, can serve as non-intrusive metrics for essential dynamics in human–machine interactions. The results emphasize the collaborative nature of the self-scheduling process and suggest that tracking conversations may serve as a viable proxy for assessing workload in remote environments.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)
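As a rough illustration of the approach this abstract describes — tagging the topical content of intra-crew speech and treating the share of scheduling-related talk as a non-intrusive workload signal — the following Python sketch shows one way such tagging could work. The transcript, topic lexicons, and scoring are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch, assuming a hypothetical transcript format and topic
# lexicons: tag each utterance by keyword match, then use the share of
# scheduling-related talk as a crude workload proxy.
from collections import Counter

# Hypothetical transcribed utterances (speaker, text) — not real study data
TRANSCRIPT = [
    ("MS1", "Can you move my EVA prep block to after the comms pass?"),
    ("MS2", "Sure, I'll drag it and shift the maintenance task."),
    ("MS1", "The timeline shows a conflict with the sampling run."),
    ("MS2", "Anyone want some coffee before we review the plan?"),
]

# Illustrative topic lexicons (assumed, not from the paper)
TOPICS = {
    "scheduling": {"move", "shift", "timeline", "block", "conflict", "drag"},
    "off_task": {"coffee"},
}

def tag_utterance(text: str) -> set[str]:
    """Return the set of topics whose keywords appear in the utterance."""
    words = {w.strip(".,?!'").lower() for w in text.split()}
    return {topic for topic, lex in TOPICS.items() if words & lex}

topic_counts = Counter(t for _, text in TRANSCRIPT for t in tag_utterance(text))

# Fraction of tagged talk devoted to scheduling: a crude workload proxy
total = sum(topic_counts.values()) or 1
print(f"scheduling share: {topic_counts['scheduling'] / total:.0%}")
```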

12 pages, 1386 KiB  
Article
Machine Learning-Supported Designing of Human–Machine Interfaces
by László Bántay and János Abonyi
Appl. Sci. 2024, 14(4), 1564; https://doi.org/10.3390/app14041564 - 15 Feb 2024
Viewed by 592
Abstract
The design and functionality of the human–machine interface (HMI) significantly affect operational efficiency and safety in process control. Alarm management techniques consider the cognitive model of operators, but mostly from a signal perception point of view. Developing a human-centric alarm management system requires the construction of an easy-to-use, supportive HMI. This work suggests a development method that uses machine learning (ML) tools. The key idea is that more supportive higher-level HMI displays can be developed by analysing operator-related events in the process log file. The resulting process model contains relevant data on the relationships among process events, enabling a network-like visualisation. Attributes of the network allow the minimisation problem of the ideal workflow–display relation to be solved. The suggested approach allows targeted process pattern exploration to design higher-level HMI displays with respect to content and hierarchy. The method was applied in a real-life hydrofluoric acid alkylation plant, where a proposal was made for the content of an overview display.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)
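To make the log-driven idea concrete, here is a hedged Python sketch: mine which operator events tend to follow one another in the process log, build a directed event network from those transitions, and rank events by connectivity as candidate content for an overview display. The log format and the degree-based ranking are illustrative assumptions, not the authors' method.

```python
# A minimal sketch, assuming a chronological list of operator-event tags.
# Consecutive events form directed edges; weighted degree ranks candidates
# for a higher-level overview display.
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Hypothetical operator-related event log — not real plant data
LOG = ["alarm_A", "valve_ack", "alarm_A", "setpoint_adj", "alarm_B",
       "valve_ack", "alarm_A", "setpoint_adj", "alarm_B", "pump_start"]

# Directed transition counts: edge (e1 -> e2) for each consecutive pair
transitions = Counter(pairwise(LOG))

# Weighted degree as a simple relevance score for display content
score = Counter()
for (src, dst), w in transitions.items():
    score[src] += w
    score[dst] += w

# Most connected events: candidate content for the overview display
for event, s in score.most_common(3):
    print(event, s)
```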

25 pages, 3835 KiB  
Article
Shared eHMI: Bridging Human–Machine Understanding in Autonomous Wheelchair Navigation
by Xiaochen Zhang, Ziyang Song, Qianbo Huang, Ziyi Pan, Wujing Li, Ruining Gong and Bi Zhao
Appl. Sci. 2024, 14(1), 463; https://doi.org/10.3390/app14010463 - 04 Jan 2024
Viewed by 1055
Abstract
As automated driving system (ADS) technology is adopted in wheelchairs, clarity about the vehicle’s imminent path becomes essential for both users and pedestrians. For users, understanding the imminent path helps mitigate anxiety and facilitates real-time adjustments. For pedestrians, this insight aids in predicting their next move when near the wheelchair. This study introduces an on-ground, projection-based shared eHMI approach for autonomous wheelchairs. By visualizing imminent motion intentions on the ground through a blend of real and virtual elements, the approach quickly clarifies wheelchair behaviors for all parties, promoting proactive measures to reduce collision risks and ensure smooth wheelchair driving. To explore the practical application of the shared eHMI, a user interface was designed and incorporated into an autonomous wheelchair simulation platform. An observation-based pilot study was conducted with both experienced wheelchair users and pedestrians, using structured questionnaires to assess the usability, user experience, and social acceptance of this interaction. The results indicate that the proposed shared eHMI displays motion intentions more clearly and is appealing, underscoring its potential contribution to the field. Future work should focus on improving visibility, practicality, safety, and trust in autonomous wheelchair interactions.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)
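For readers curious how an on-ground projection of imminent motion might be computed, here is a minimal Python sketch that turns a short planned trajectory into a swept ground corridor a projector could render. The trajectory format, horizon, and chair width are assumptions for illustration; the paper does not describe its platform at this level of detail.

```python
# A minimal sketch, assuming waypoints of (x, y, heading) in metres/radians:
# offset each waypoint left and right of the heading to get the two edges
# of the corridor the wheelchair will sweep, returned as a closed polygon.
import math

CHAIR_HALF_WIDTH = 0.35  # metres, assumed

def path_footprint(waypoints, horizon=5):
    """Ground footprint polygon for the next `horizon` waypoints."""
    left, right = [], []
    for x, y, heading in waypoints[:horizon]:
        # Unit normal to the heading direction (heading dir = (cos, sin))
        nx_, ny_ = -math.sin(heading), math.cos(heading)
        left.append((x + CHAIR_HALF_WIDTH * nx_, y + CHAIR_HALF_WIDTH * ny_))
        right.append((x - CHAIR_HALF_WIDTH * nx_, y - CHAIR_HALF_WIDTH * ny_))
    return left + right[::-1]  # closed corridor outline

# Hypothetical planned path: straight, then a gentle right turn
plan = [(0, 0, 0.0), (0.5, 0, 0.0), (1.0, 0, -0.2),
        (1.45, -0.1, -0.4), (1.85, -0.3, -0.6)]
print(path_footprint(plan))
```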

19 pages, 10669 KiB  
Article
A Parallel Multimodal Integration Framework and Application for Cake Shopping
by Hui Fang, Dongdong Weng and Zeyu Tian
Appl. Sci. 2024, 14(1), 299; https://doi.org/10.3390/app14010299 - 29 Dec 2023
Viewed by 545
Abstract
Multimodal interaction systems can provide users with natural and compelling interactive experiences. Despite the availability of various sensing devices, only a few commercial multimodal applications are available. One reason may be the lack of an efficient framework for fusing heterogeneous data and relieving resource pressure. This paper presents a parallel multimodal integration framework that ensures that errors and external damage to the integrated devices remain uncorrelated. The proposed relative weighted fusion method and modality delay strategy process the heterogeneous data at the decision level. The parallel modality operation flow allows each device to operate across multiple terminals, reducing resource demands on any single computer. The universal fusion methods and independent devices further remove constraints on the number of integrated modalities, giving the framework extensibility. Based on the framework, we develop a multimodal virtual shopping system integrating five input modalities and three output modalities. Objective experiments show that the system can accurately fuse heterogeneous data and understand interaction intent. User studies indicate that multimodal shopping is immersive and entertaining. Our framework offers a development paradigm for multimodal systems, fostering multimodal applications across various domains.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)
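Decision-level fusion of this kind is easy to sketch. The following Python snippet shows one plausible reading of a relative weighted fusion with a modality delay strategy: wait a short window for slower modalities, discard stale decisions, then pick the intent with the highest reliability-weighted confidence sum. The weights, window, and message format are illustrative assumptions, not the framework's actual algorithm.

```python
# A minimal sketch, assuming each modality emits
# (modality, intent, confidence, timestamp) decisions.
from collections import defaultdict

# Assumed per-modality reliability weights (would be tuned or learned)
WEIGHTS = {"speech": 0.5, "gesture": 0.3, "gaze": 0.2}
DELAY_WINDOW = 0.3  # seconds to wait for slower modalities (assumed)

def fuse(events, t_now):
    """Return the intent with the highest weighted confidence sum."""
    scores = defaultdict(float)
    for modality, intent, conf, ts in events:
        if t_now - ts <= DELAY_WINDOW:  # drop stale modality decisions
            scores[intent] += WEIGHTS[modality] * conf
    return max(scores, key=scores.get) if scores else None

events = [
    ("speech", "add_to_cart", 0.9, 10.00),
    ("gesture", "point_at_cake", 0.7, 10.05),
    ("gaze", "add_to_cart", 0.6, 10.10),
]
print(fuse(events, t_now=10.2))  # -> add_to_cart
```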

29 pages, 1816 KiB  
Article
A Methodology to Evaluate User Experience for People with Autism Spectrum Disorder
by Katherine Valencia, Cristian Rusu, Federico Botella and Erick Jamet
Appl. Sci. 2022, 12(22), 11340; https://doi.org/10.3390/app122211340 - 08 Nov 2022
Viewed by 1945
Abstract
People with Autism Spectrum Disorder (ASD) have an affinity for technology, which is why multiple studies have implemented technological proposals focused on developing skills in people with ASD. These studies have evaluated the user experience (UX) and/or usability of their proposals through different evaluation methods, so that the proposals can be friendly and usable for users with ASD. However, the evaluation methods and instruments used do not consider the specific characteristics and needs of people with ASD, and their implementations lack detail. To formalize the UX evaluation process, we propose a three-stage methodology for evaluating the UX of systems, products, and services used by adults with ASD. The methodology’s processes, evaluation methods, and instruments take the characteristics of people with ASD into account, so that the UX evaluation improves these users’ satisfaction with, and perception of, the system, product, or service being evaluated. The proposal has been validated in two rounds through the opinions of experts in UX/usability and ASD, which helped to specify, restructure, and improve the methodology.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)

29 pages, 11733 KiB  
Article
Visual Signifier for Large Multi-Touch Display to Support Interaction in a Virtual Museum Interface
by Saipunidzam Mahamad, Fasihah Mohammad Shuhaili, Suziah Sulaiman, Dayang Rohaya Awang Rambli and Abdullateef Oluwagbemiga Balogun
Appl. Sci. 2022, 12(21), 11191; https://doi.org/10.3390/app122111191 - 04 Nov 2022
Cited by 1 | Viewed by 1506
Abstract
The signifier is regarded as a crucial part of interface design, since it ensures that users can operate the device appropriately and understand the interaction taking place. Useful signifiers keep users’ attention on learning, but poorly designed signifiers can disrupt learning by slowing progress and making the interface harder to use. The problem is that prior research identified the qualities of signifiers, but their attributes in terms of being visually apparent across broad interaction areas were not well recognized. Implementing a signifier without sufficient visual features, such as a picture, figure, or gesture, may interfere with the user’s ability to navigate the surface, particularly in domains that demand “leisure exploration”, such as culture and heritage, and notably museum applications. As technology advances and improves, employing a multi-touch tabletop as a public viewing medium should be advantageous in conserving cultural heritage. Some visual elements should be incorporated into the signifier to produce a conspicuous presentation and make it easier for users to identify. In this study, a preliminary study, a card-sorting survey, and a high-fidelity experiment were used to investigate users’ experience, perspective, and interpretation of the visual signifiers of a museum interface for large displays. This work offers a set of integrated visual signifiers for a large multi-touch display that substantially supports navigation and interaction on a large display, thereby aiding comprehension of the displayed information visualization.
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)
