Recent Advances in Human-Computer Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (15 February 2022) | Viewed by 31269

Special Issue Editors

Prof. Dr. Xavier Ferre
ETSI Informáticos, Universidad Politécnica de Madrid, Campus de Montegancedo s/n, 28660 Boadilla del Monte, Madrid, Spain
Interests: user experience (UX) in mobile applications for older users; integration of UX practices into software engineering; technology adoption for e-health and mobile-based systems

Prof. Dr. Jose Antonio Pow-Sang
Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel, 15088 Lima, Peru
Interests: human–computer interaction; software engineering; software effort estimation; engineering and computing education

Prof. Dr. Hai-Ning Liang
Department of Computing, Xi’an Jiaotong-Liverpool University, SD447 (Science Building), 111 Ren’ai Road, Dushu Lake Science and Education Innovation District, Suzhou 215123, China
Interests: human–computer interaction; virtual and augmented reality; gaming technologies

Special Issue Information

Dear Colleagues,

The field of human–computer interaction (HCI) is concerned with the design of interactive information technology intended for human users, focusing on interaction technology and user interface design for an improved user experience (UX).

Some interaction technologies, such as ubiquitous computing, augmented reality, virtual reality, wearable sensors, immersive data visualization, or gesture-based interaction, have gone mainstream. New domains have emerged that require the design of an appropriate HCI, such as autonomous vehicles, personal health-monitoring devices, Industry 4.0, or fintech.

The IT user base has been growing steadily, with mobile devices being the first Internet-connected computers that millions of users will ever use. This includes older users, who embrace technology when they have enough motivation and the UX is specifically designed to take their different cognitive and physical abilities into account.

In this Special Issue, we aim to provide a forum for colleagues to report the most up-to-date research results in the HCI field, as well as comprehensive surveys of the state-of-the-art in relevant specific areas. Both original contributions with theoretical novelty and practical solutions for addressing particular problems in HCI are solicited.

The topics of interest include, but are not limited to:

  • Interaction technologies for sensor-based environments.
  • Visualization of data from heterogeneous sources.
  • HCI in smart and AI-based systems.
  • Human–robot interaction.
  • Ubiquitous computing and implicit interaction.
  • UX design for older users.
  • Mobile interaction design.
  • Multimodal interaction.
  • Usability of IoT systems.
  • HCI in eHealth and wellbeing systems.
  • Brain–computer interaction.
  • New HCI methods for UX design and evaluation.
  • Intelligent user interfaces.
  • 3D user interfaces for virtual and augmented reality systems.
  • Haptic interfaces.

All submissions will be peer-reviewed, and accepted papers will be published immediately. Submitted papers should not be under consideration for publication elsewhere.

Prof. Dr. Xavier Ferre
Prof. Dr. Jose Antonio Pow-Sang
Prof. Dr. Hai-Ning Liang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • usability
  • UX
  • UCD
  • HCI
  • usability heuristics
  • design guidelines
  • interaction design
  • multimodal interfaces
  • user interface design

Published Papers (9 papers)

Research

28 pages, 6691 KiB  
Article
On the Use of Large Interactive Displays to Support Collaborative Engagement and Visual Exploratory Tasks
by Lei Chen, Hai-Ning Liang, Jialin Wang, Yuanying Qu and Yong Yue
Sensors 2021, 21(24), 8403; https://doi.org/10.3390/s21248403 - 16 Dec 2021
Cited by 3 | Viewed by 2897
Abstract
Large interactive displays can provide suitable workspaces for learners to conduct collaborative learning tasks with visual information in co-located settings. In this research, we explored the use of these displays to support collaborative engagement and exploratory tasks with visual representations. Our investigation looked at the effect of four factors (number of virtual workspaces within the display, number of displays, position arrangement of the collaborators, and collaborative modes of interaction) on learners’ knowledge acquisition, engagement level, and task performance. To this end, a user study was conducted with 72 participants divided into 6 groups using an interactive tool developed to support the collaborative exploration of 3D visual structures. The results of this study showed that learners with one shared workspace and one single display can achieve better user performance and engagement levels. In addition, the back-to-back position with learners sharing their view and control of the workspaces was the most favorable. It also led to improved learning outcomes and engagement levels during the collaboration process.

27 pages, 22607 KiB  
Article
An Architecture for Collaborative Terrain Sketching with Mobile Devices
by Sonia Mendoza, Andrés Cortés-Dávalos, Luis Martín Sánchez-Adame and Dominique Decouchant
Sensors 2021, 21(23), 7881; https://doi.org/10.3390/s21237881 - 26 Nov 2021
Cited by 2 | Viewed by 1628
Abstract
3D terrains used in digital animations and videogames are typically created by several collaborators with a single-user application, which constrains them to update the shared terrain from their PCs using a turn-taking strategy. Moreover, collaborators have to visualize the terrain through 2D views, which confuses novice users when conceiving its shape in 3D. In this article, we describe an architecture for collaborative applications that allows co-located users to sketch a terrain concurrently using their mobile devices. Two interaction modes are supplied: a standard one and an augmented reality-based mode, which helps collaborators understand the 3D terrain shape. Using the “painting with brushes” paradigm, users can modify the terrain while visualizing its shape evolution through the camera of their devices. Work coordination is promoted by enriching the 3D space with each collaborator’s avatar, which provides awareness information about identity, location, and current action. We implemented a collaborative application from this architecture that was tested by groups of users, who assessed its hedonic and pragmatic qualities in both interaction modes and compared them with the qualities of a similar Web terrain editor. The results showed that the augmented reality mode of our prototype was considered more attractive and usable by the participants.
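
The “painting with brushes” interaction described above can be illustrated with a minimal heightmap operation. The following Python sketch is hypothetical (the function name and the Gaussian falloff are assumptions, not the paper’s implementation); it shows how concurrent brush strokes from two collaborators might raise and lower a shared 2D terrain.

```python
import numpy as np

def apply_brush(heightmap: np.ndarray, cx: int, cy: int,
                radius: float, strength: float) -> None:
    """Raise (strength > 0) or lower (strength < 0) the terrain around
    (cx, cy) with a Gaussian falloff, modifying the heightmap in place."""
    h, w = heightmap.shape
    ys, xs = np.ogrid[:h, :w]
    dist_sq = (xs - cx) ** 2 + (ys - cy) ** 2
    heightmap += strength * np.exp(-dist_sq / (2.0 * radius ** 2))

# Example: two co-located collaborators editing the same 128x128 terrain.
terrain = np.zeros((128, 128))
apply_brush(terrain, cx=40, cy=40, radius=10.0, strength=1.5)   # user A raises a hill
apply_brush(terrain, cx=90, cy=70, radius=15.0, strength=-0.8)  # user B carves a valley
```

In an architecture like the one described, each stroke would also need to be propagated to the other collaborators’ devices so that every camera view reflects the evolving shape.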

26 pages, 751 KiB  
Article
Understanding UX Better: A New Technique to Go beyond Emotion Assessment
by Leonardo Marques, Patrícia Gomes Matsubara, Walter Takashi Nakamura, Bruna Moraes Ferreira, Igor Scaliante Wiese, Bruno Freitas Gadelha, Luciana Martinez Zaina, David Redmiles and Tayana Uchôa Conte
Sensors 2021, 21(21), 7183; https://doi.org/10.3390/s21217183 - 29 Oct 2021
Cited by 7 | Viewed by 3389
Abstract
User experience (UX) is a quality aspect that considers the emotions evoked by the system, extending the usability concept beyond effectiveness, efficiency, and satisfaction. Practitioners and researchers are aware of the importance of evaluating UX. Thus, UX evaluation is a growing field with diverse approaches. Despite this variety, most approaches produce only a general indication of the experience and do not seek to capture the problem that gave rise to the bad UX. The absence of this information makes it difficult to obtain results that are relevant for improving the application, and challenging to identify what caused a negative user experience. To address this gap, we developed a UX evaluation technique called UX-Tips. This paper presents UX-Tips and reports two empirical studies performed in an academic and an industrial setting to evaluate it. Our results show that UX-Tips performed well in terms of efficiency and effectiveness, made it possible to identify the causes that led to a negative user experience, and was easy to use. In this sense, we present a new technique suitable for use in both academic and industrial settings, allowing UX evaluation and finding the problems that may lead to a negative experience.

22 pages, 27420 KiB  
Article
Optimizing Sensor Position with Virtual Sensors in Human Activity Recognition System Design
by Chengshuo Xia and Yuta Sugiura
Sensors 2021, 21(20), 6893; https://doi.org/10.3390/s21206893 - 18 Oct 2021
Cited by 11 | Viewed by 2730
Abstract
Human activity recognition (HAR) systems combined with machine learning normally serve users based on a fixed sensor position interface. Variations in the installation position alter recognition performance and require a new training dataset. Therefore, we need to understand the role of sensor position in HAR system design in order to optimize its effect. In this paper, we designed an optimization scheme with virtual sensor data for the HAR system. The system is able to generate the optimal sensor position from all possible locations for a given number of sensors. Using virtual sensor data, the training dataset can be obtained at low cost. The system supports the decision-making process of sensor position selection with accuracy feedback, and outputs the classifier at a lower cost than a conventional training model.
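
The optimization loop sketched in this abstract (train a classifier on virtual sensor data for each candidate placement, then keep the best-scoring position) can be illustrated as follows. This Python sketch is a hypothetical reconstruction, not the authors’ code; the data layout, the random-forest classifier, and the position names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def best_sensor_position(virtual_data: dict) -> str:
    """Pick the candidate body position whose virtual sensor data yields the
    highest cross-validated activity-recognition accuracy.

    virtual_data maps a position name (e.g., 'wrist') to (X, y), where X is
    a (samples, features) array synthesized from virtual sensors and y holds
    the activity labels."""
    scores = {}
    for position, (X, y) in virtual_data.items():
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores[position] = cross_val_score(clf, X, y, cv=5).mean()
    return max(scores, key=scores.get)

# Example with synthetic stand-in data for two candidate positions.
rng = np.random.default_rng(0)
candidates = {pos: (rng.normal(size=(200, 12)), rng.integers(0, 4, 200))
              for pos in ("wrist", "ankle")}
print(best_sensor_position(candidates))
```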

20 pages, 1894 KiB  
Article
Gaze Focalization System for Driving Applications Using OpenFace 2.0 Toolkit with NARMAX Algorithm in Accidental Scenarios
by Javier Araluce, Luis M. Bergasa, Manuel Ocaña, Elena López-Guillén, Pedro A. Revenga, J. Felipe Arango and Oscar Pérez
Sensors 2021, 21(18), 6262; https://doi.org/10.3390/s21186262 - 18 Sep 2021
Cited by 7 | Viewed by 2437
Abstract
Monitoring driver attention using gaze estimation is a typical approach on road scenes. This indicator is of great importance for safe driving, especially in Level 3 and Level 4 automation systems, where the take-over request control strategy could be based on the driver’s gaze estimation. Current state-of-the-art gaze estimation techniques are intrusive and costly, and these two aspects limit their usage in real vehicles. To test this kind of application, there are some databases focused on critical situations in simulation, but they do not show real accidents because of the complexity and danger of recording them. Within this context, this paper presents a low-cost, non-intrusive, camera-based gaze mapping system that integrates the open-source, state-of-the-art OpenFace 2.0 Toolkit to visualize driver focalization on a database of recorded real traffic scenes through a heat map. NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) is used to establish the correspondence between the OpenFace 2.0 parameters and the screen region the user is looking at. This proposal improves on our previous work, which was based on a linear approximation using a projection matrix. The proposal has been validated using the recent and challenging public database DADA2000, which has 2000 video sequences with annotated driving scenarios based on real accidents. We compare our proposal with our previous one and with an expensive desktop-mounted eye tracker, obtaining on-par results. We proved that this method can be used to record driver attention databases.
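
The mapping stage of this pipeline can be illustrated with a simplified stand-in: a NARX-style polynomial regressor from gaze angles to screen coordinates (full NARMAX additionally models moving-average noise terms). The Python sketch below uses synthetic data and is not the authors’ implementation; the gaze_angle_x/gaze_angle_y feature names are assumptions based on the CSV output typically produced by the OpenFace 2.0 command-line tools.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def build_lagged(x: np.ndarray, lags: int) -> np.ndarray:
    """Stack current and past samples so the regressor sees a short input
    history, as in a NARX model (NARMAX noise terms omitted in this sketch)."""
    rows = [x[lags - k : len(x) - k] for k in range(lags + 1)]
    return np.hstack(rows)

# Synthetic stand-in for per-frame OpenFace gaze features (yaw, pitch);
# a real system would read gaze_angle_x / gaze_angle_y from the toolkit's CSV.
rng = np.random.default_rng(42)
gaze_angles = rng.normal(scale=0.3, size=(500, 2))
# Calibration targets: screen pixels the user fixated, via a nonlinear gaze-to-screen law.
screen_xy = 960 + 800 * np.tanh(gaze_angles) + rng.normal(scale=5, size=(500, 2))

LAGS = 2
X = build_lagged(gaze_angles, LAGS)   # current + 2 past frames -> 6 inputs
y = screen_xy[LAGS:]                  # targets aligned with the lagged inputs

# Polynomial expansion supplies the nonlinear terms of the model structure.
model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X, y)
heatmap_points = model.predict(X)     # predicted screen positions per frame
```

Accumulating heatmap_points over a video sequence would then yield a heat map like the one the paper overlays on the recorded traffic scenes.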

19 pages, 1789 KiB  
Article
Are UX Evaluation Methods Providing the Same Big Picture?
by Walter Takashi Nakamura, Iftekhar Ahmed, David Redmiles, Edson Oliveira, David Fernandes, Elaine H. T. de Oliveira and Tayana Conte
Sensors 2021, 21(10), 3480; https://doi.org/10.3390/s21103480 - 17 May 2021
Cited by 5 | Viewed by 2866
Abstract
The success of a software application is related to users’ willingness to keep using it. In this sense, evaluating User eXperience (UX) became an important part of the software development process. Researchers have been carrying out studies by employing various methods to evaluate the UX of software products. Some studies reported varied and even contradictory results when applying different UX evaluation methods, making it difficult for practitioners to identify which results to rely upon. However, these works did not evaluate the developers’ perspectives and their impacts on the decision process. Moreover, such studies focused on one-shot evaluations, which cannot assess whether the methods provide the same big picture of the experience (i.e., deteriorating, improving, or stable). This paper presents a longitudinal study in which 68 students evaluated the UX of an online judge system by employing AttrakDiff, UEQ, and Sentence Completion methods at three moments along a semester. This study reveals contrasting results between the methods, which affected developers’ decisions and interpretations. With this work, we intend to draw the HCI community’s attention to the contrast between different UX evaluation methods and the impact of their outcomes in the software development process.

25 pages, 5371 KiB  
Article
How to Interact with a Fully Autonomous Vehicle: Naturalistic Ways for Drivers to Intervene in the Vehicle System While Performing Non-Driving Related Tasks
by Aya Ataya, Won Kim, Ahmed Elsharkawy and SeungJun Kim
Sensors 2021, 21(6), 2206; https://doi.org/10.3390/s21062206 - 21 Mar 2021
Cited by 8 | Viewed by 3990
Abstract
Autonomous vehicle technology increasingly allows drivers to turn their primary attention to secondary tasks (e.g., eating or working). This dramatic behavior change requires new input modalities to support driver–vehicle interaction, which must match the driver’s in-vehicle activities and the interaction situation. Prior studies that addressed this question did not consider how the acceptance of inputs was affected by the physical and cognitive load experienced by drivers engaged in Non-driving Related Tasks (NDRTs), or how their acceptance varies according to the interaction situation. This study investigates naturalistic interactions with a fully autonomous vehicle system in different intervention scenarios while drivers perform NDRTs. We presented an online methodology to 360 participants showing four NDRTs with different physical and cognitive engagement levels, and tested the six most common intervention scenarios (24 cases). Participants evaluated our proposed seven natural input interactions for each case: touch, voice, hand gesture, and their combinations. Results show that NDRTs influence the driver’s input interaction more than intervention scenario categories. In contrast, variation of physical load has more influence on input selection than variation of cognitive load. We also present a decision-making model of driver preferences to determine the most natural inputs and help User Experience designers better meet drivers’ needs.

Review

31 pages, 2350 KiB  
Review
A Review of Data Gathering Methods for Evaluating Socially Assistive Systems
by Shi Qiu, Pengcheng An, Kai Kang, Jun Hu, Ting Han and Matthias Rauterberg
Sensors 2022, 22(1), 82; https://doi.org/10.3390/s22010082 - 23 Dec 2021
Cited by 10 | Viewed by 4389
Abstract
Social interactions significantly impact the quality of life for people with special needs (e.g., older adults with dementia and children with autism). They may suffer loneliness and social isolation more often than people without disabilities. There is a growing demand for technologies to satisfy the social needs of such user groups. However, evaluating these systems can be challenging due to the extra difficulty of gathering data from people with special needs (e.g., communication barriers involving older adults with dementia and children with autism). Thus, in this systematic review, we focus on studying data gathering methods for evaluating socially assistive systems (SAS). Six academic databases (i.e., Scopus, Web of Science, ACM, Science Direct, PubMed, and IEEE Xplore) were searched, covering articles published from January 2000 to July 2021. A total of 65 articles met the inclusion criteria for this systematic review. The results showed that existing SASs most often targeted people with visual impairments, older adults, and children with autism. For instance, a common type of SASs aimed to help blind people perceive social signals (e.g., facial expressions). SASs were most commonly assessed with interviews, questionnaires, and observation data. Around half of the interview studies only involved target users, while the other half also included secondary users or stakeholders. Questionnaires were mostly used with older adults and people with visual impairments to measure their social interaction, emotional state, and system usability. A great majority of observational studies were carried out with users in special age groups, especially older adults and children with autism. We thereby contribute an overview of how different data gathering methods were used with various target users of SASs. Relevant insights are extracted to inform future development and research.

23 pages, 4158 KiB  
Review
A Systematic Mapping Study on Integration Proposals of the Personas Technique in Agile Methodologies
by Patricia Losana, John W. Castro, Xavier Ferre, Elena Villalba-Mora and Silvia T. Acuña
Sensors 2021, 21(18), 6298; https://doi.org/10.3390/s21186298 - 20 Sep 2021
Cited by 5 | Viewed by 3233
Abstract
Agile development processes are increasing their consideration of usability by integrating various user-centered design techniques throughout development. One such technique is Personas, which proposes the creation of fictitious users with real preferences to drive application design. Since applying this technique conflicts with the time constraints of agile development, Personas has been adapted over the years. Our objective is to determine the adoption level and type of integration, as well as to propose improvements to the Personas technique for agile development. A systematic mapping study was performed, retrieving 28 articles grouped by agile methodology type. We found some common integration strategies regardless of the specific agile approach, along with some frequent problems, mainly related to Persona modelling and context representation. Based on these limitations, we propose an adaptation to the technique in order to reduce the creation time for a preliminary persona. The number of publications dealing with Personas and agile development is increasing, which reveals a growing interest in the application of this technique to develop usable agile software.
