Eye Tracking in Human–Computer Interaction

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 10514

Special Issue Editors


Guest Editor: Prof. Dr. Kai Essig
Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
Interests: eye tracking; human–computer interaction; cognitive assistive systems; human factors; usability engineering

Guest Editor: Dr. André Krause
Mediablix IIT-GmbH, Bielefeld, Germany
Interests: machine learning; cognitive models; eye tracking and computer security

Special Issue Information

Dear Colleagues,

With recent progress in hardware and software, eye tracking solutions have become lighter, less obtrusive, and cheaper. This is particularly true for mobile systems, which are becoming increasingly similar to ordinary glasses. These systems can not only be used in different application areas, but can also be operated for longer periods of time. Together with recent technological advances, such as smart glasses, wearables, Internet of Things devices, smart homes, natural human–computer interaction (HCI) interfaces, advanced driver assistance systems, Industry 4.0, personal digital assistants, and mobile assistance and diagnostic systems, as well as progress in AI and machine learning (ML), completely new forms of personalized, anticipative, and intuitive HCI are emerging.

To provide an overview of these recent developments in this fascinating and rapidly developing area of natural and intuitive HCI, we invite submissions to a Special Issue on a range of topics related to the theme of “Eye tracking in human–computer interaction”.

Congruent with the overall aim of the journal, we hope to stimulate a useful and interdisciplinary interchange between individuals working primarily on fundamental and theoretical issues and those working on applied aspects.

We are particularly interested in papers that apply eye tracking, on its own or in combination with other methods (such as additional sensor modalities or language), within this broad field. Such papers can be explorative, empirical, meta-analytic, or technical. Papers that seek to bridge the gap between theoretical and applied aspects, as well as papers that study human action and perception behavior and its integration into natural and intuitive forms of HCI, will be especially welcome.

Some examples of possible topics for this Special Issue are provided below. These should be viewed as a suggestive list, rather than an exhaustive catalog. Individuals unsure about whether a proposed submission would be appropriate are invited to contact one of the Special Issue Editors, Kai Essig or André Krause.

Applications of Eye Movements in HCI

  • Visual interaction/communication with machines
  • Security measures to protect the privacy and data of HCI users
  • Eye–hand span/coordination
  • Visual control of machines (such as robots, production machines)
  • Multi-modal approaches to HCI—such as natural user interfaces (NUI)
  • Eye trackers as input and control devices for handicapped people

Eye Movements for Personalized HCI

  • Vision-based assistive systems
  • AI methods for personalized and anticipative HCI
  • Vision-based models for HCI
  • Prediction of human perception and action behavior from eye and body movements

Eye Movements in AR/VR in Interactive HCI Simulations

  • Eye tracking in interactive VR/AR simulations
  • Autonomous cars and advanced driver assistance systems
  • Foveated rendering techniques
  • Interface simulation and evaluation in VR/AR

Cultural and Psychological Aspects of Eye Movements in HCI

  • Cultural differences in HCI
  • Perception of emotions
  • Perception and interaction with avatars
  • Measurement and analysis of cognitive processes (e.g. task-related cognitive load)

Technical and Evaluation Aspects

  • Automatic analysis of eye movements in HCI
  • Low-cost eye trackers for affordable HCI
  • Eye tracking for human augmentation (e.g. autofocus glasses)
  • Robust outdoor eye tracking solutions (e.g., insensitive to sunlight, slip, vibration, moisture, etc.)
  • Evaluation of user interfaces/HCI interactions
  • Summary or outline paper on current status and future developments

Prof. Dr. Kai Essig
Dr. André Krause
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

14 pages, 1496 KiB  
Article
Attentional Orienting in Front and Rear Spaces in a Virtual Reality Discrimination Task
by Rébaï Soret, Pom Charras, Christophe Hurter and Vsevolod Peysakhovich
Vision 2022, 6(1), 3; https://doi.org/10.3390/vision6010003 - 06 Jan 2022
Cited by 1 | Viewed by 2424
Abstract
Recent studies on covert attention suggest that the visual processing of information in front of us differs depending on whether that information is physically present in front of us or is a reflection of information behind us (mirror information). This difference in processing suggests that we have different processes for directing our attention to objects in front of us (front space) or behind us (rear space). In this study, we investigated the effects of attentional orienting in front and rear space following visual or auditory endogenous cues. Twenty-one participants performed a modified version of the Posner paradigm in virtual reality during a spaceship discrimination task. An eye tracker integrated into the virtual reality headset was used to make sure that the participants did not move their eyes and used their covert attention. The results show that informative cues produced faster response times than non-informative cues, but no impact on target identification was observed. In addition, we observed faster response times when the target occurred in front space rather than in rear space. These results are consistent with a differentiation of the orienting cognitive process between front and rear space. Several explanations are discussed. No effect was found on subjects’ eye movements, suggesting that participants did not use their overt attention to improve task performance.
(This article belongs to the Special Issue Eye Tracking in Human–Computer Interaction)
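To make the covert-attention control mentioned above concrete, the following sketch shows one way a fixation-compliance check could be implemented. It is a hypothetical illustration, not the authors' analysis pipeline: it assumes gaze samples are already expressed in degrees of visual angle relative to the fixation point, and the 1.5° rejection threshold is an assumed value.

```python
import numpy as np

def is_covert_trial(gaze_x_deg, gaze_y_deg, max_deviation_deg=1.5):
    """Return True if gaze stayed within a radius of the fixation point.

    gaze_x_deg, gaze_y_deg : sequences of gaze samples in degrees of
    visual angle relative to the fixation cross (hypothetical format).
    max_deviation_deg : rejection threshold; 1.5 deg is an assumed value.
    """
    x = np.asarray(gaze_x_deg, dtype=float)
    y = np.asarray(gaze_y_deg, dtype=float)
    # Ignore samples lost to blinks or tracking dropouts (encoded as NaN).
    valid = ~(np.isnan(x) | np.isnan(y))
    if not valid.any():
        return False  # no usable samples: treat the trial as non-compliant
    eccentricity = np.hypot(x[valid], y[valid])
    return bool(np.max(eccentricity) <= max_deviation_deg)

# Example: a trial with small drift counts as covert, a saccade does not.
print(is_covert_trial([0.1, -0.2, 0.3], [0.0, 0.1, -0.1]))  # True
print(is_covert_trial([0.1, 4.0, 4.2], [0.0, 0.5, 0.4]))    # False
```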

10 pages, 1167 KiB  
Article
Characteristics of Visual Saliency Caused by Character Feature for Reconstruction of Saliency Map Model
by Hironobu Takano, Taira Nagashima and Kiyomi Nakamura
Vision 2021, 5(4), 49; https://doi.org/10.3390/vision5040049 - 19 Oct 2021
Viewed by 2203
Abstract
Visual saliency maps have been developed to estimate the bottom-up visual attention of humans. A conventional saliency map represents bottom-up visual attention using image features such as intensity, orientation, and color. However, it is difficult to estimate visual attention with a conventional saliency map in the case of top-down visual attention. In this study, we investigate the visual saliency of characters using still images that include both characters and symbols. The experimental results indicate that characters have a specific visual saliency that is independent of the type of language.
(This article belongs to the Special Issue Eye Tracking in Human–Computer Interaction)
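As background for the “conventional saliency map” referred to in the abstract, the sketch below builds a heavily simplified bottom-up map from intensity and color-opponency channels with center-surround (difference-of-Gaussians) filtering, in the spirit of classic Itti-style models. It is a generic illustration, not the authors' model; the choice of channels, scales, and weights is assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(image_rgb, center_sigma=2.0, surround_sigma=8.0):
    """Very simplified bottom-up saliency map (illustrative only).

    image_rgb : float array of shape (H, W, 3) with values in [0, 1].
    Builds intensity and red-green / blue-yellow opponency channels,
    applies center-surround filtering (difference of Gaussians),
    and averages the normalized feature maps.
    """
    img = image_rgb.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # Feature channels (channel set and opponency definitions are assumptions).
    intensity = (r + g + b) / 3.0
    red_green = r - g
    blue_yellow = b - (r + g) / 2.0

    maps = []
    for channel in (intensity, red_green, blue_yellow):
        center = gaussian_filter(channel, center_sigma)
        surround = gaussian_filter(channel, surround_sigma)
        contrast = np.abs(center - surround)
        # Normalize each feature map to [0, 1] before combining.
        rng = contrast.max() - contrast.min()
        if rng > 0:
            contrast = (contrast - contrast.min()) / rng
        maps.append(contrast)

    return np.mean(maps, axis=0)

# Usage sketch: a gray image with one red square should peak at the square.
demo = np.full((64, 64, 3), 0.5)
demo[28:36, 28:36] = [1.0, 0.0, 0.0]
sal = simple_saliency(demo)
print(np.unravel_index(np.argmax(sal), sal.shape))  # near (31, 31)
```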

28 pages, 1075 KiB  
Article
High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods
by Fabricio Batista Narcizo, Fernando Eustáquio Dantas dos Santos and Dan Witzner Hansen
Vision 2021, 5(3), 41; https://doi.org/10.3390/vision5030041 - 15 Sep 2021
Cited by 5 | Viewed by 4159
Abstract
This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can negatively influence gaze estimation methods when building a commercial or off-the-shelf eye tracker device, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye feature distribution when the eye-camera is far from the eye’s optical axis. This paper proposes geometric transformation methods to reshape the eye feature distribution based on the virtual alignment of the eye-camera in the center of the eye’s optical axis. The data analysis uses eye-tracking data from a simulated environment and an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimations in the high-accuracy range.
(This article belongs to the Special Issue Eye Tracking in Human–Computer Interaction)
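For readers unfamiliar with interpolation-based methods, the sketch below shows the common second-order polynomial calibration that maps eye features (for example, normalized pupil-glint vectors) to screen coordinates via least squares. It is a generic polynomial baseline of the kind the abstract mentions, not the geometric transformation proposed in the paper; the feature format and the synthetic 3x3 calibration grid are assumptions.

```python
import numpy as np

def fit_polynomial_gaze(eye_xy, screen_xy):
    """Fit a second-order polynomial mapping eye features -> screen points.

    eye_xy    : (N, 2) eye features per calibration target, e.g. normalized
                pupil-glint vectors (format assumed for illustration).
    screen_xy : (N, 2) known screen coordinates of the calibration targets.
    Returns a (6, 2) coefficient matrix for the design [1, x, y, x*y, x^2, y^2].
    """
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(design, screen_xy, rcond=None)
    return coeffs

def estimate_gaze(eye_xy, coeffs):
    """Map new eye features to screen coordinates with fitted coefficients."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return design @ coeffs

# Usage sketch with a synthetic 3x3 calibration grid (values are made up).
targets = np.array([[px, py] for py in (0.1, 0.5, 0.9) for px in (0.1, 0.5, 0.9)])
eye = 0.5 * targets + 0.02 * targets**2          # fake eye-feature responses
coeffs = fit_polynomial_gaze(eye, targets)
print(estimate_gaze(eye[:1], coeffs))            # approx. [[0.1, 0.1]]
```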
