New Technologies and Applications of Visual-Based Human–Computer Interactions

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 October 2024

Special Issue Editor


Prof. Dr. Kai Essig
Guest Editor
Faculty of Communication and Environment, Rhine-Waal University of Applied Sciences, Kamp-Lintfort, Germany
Interests: eye tracking; human–computer interaction; cognitive assistive systems; human factors; usability engineering

Special Issue Information

Dear Colleagues,

Current human–computer interactions (HCIs) are mainly established via either traditional input devices (such as a mouse or keyboard) or modern ones (such as remote controls, joysticks, or gloves, particularly in VR environments). These input devices provide only limited input capabilities (pointing, selecting, dragging, aiming, etc.), are inconvenient to use, do not offer a sufficiently high degree of freedom, and are expensive.

With the progress and easier availability of AI and computer vision techniques, as well as of camera and information processing hardware, a wide range of information about users (e.g., facial expressions, uncertainty, posture, gestures, and general activity) can be extracted in real time from the video streams of several cameras and serve as a passive, non-intrusive, non-contact, and natural input alternative for HCIs. Eye gaze and facial expressions can be used to determine what the user is focusing on and whether they are unsure about or understand the current situation. Detected hand movements and gestures can be applied to control systems or, in combination with gaze, to anticipate which object the user intends to grasp (eye–hand span). Visual information can also be used alongside or to complement other communication channels, such as speech, the keyboard, and the mouse. Visual-based HCI is therefore probably the most widespread research area for investigating natural and intuitive interfaces between humans and technical systems.
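
As a minimal illustrative sketch of the kind of real-time pipeline described above (assuming Python with the opencv-python package and a standard webcam; the parameters below are examples only, not a prescribed method), face locations can be extracted from a live camera stream as follows:

    # Minimal sketch: real-time face location detection from a webcam stream.
    # Assumes the opencv-python package is installed; all parameters are illustrative.
    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    capture = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detected bounding boxes could feed attention- or gaze-aware interfaces.
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Visual-based HCI demo", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break

    capture.release()
    cv2.destroyAllWindows()

Comparable detectors exist for hands, body pose, and gaze; the Special Issue welcomes work on any such component and its integration into interactive systems.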

Although there has been much progress in the field of visual-based HCIs over the last decade, many research questions remain, for example: which techniques should be used when (in terms of time), where (in terms of context), and in which combination; how they can be made more robust and faster; and how visual information in general should be used in HCIs.

To provide an overview of recent developments in this fascinating and rapidly developing area of visual-based HCIs, we invite submissions on a range of topics to a Special Issue titled “New Technologies and Applications of Visual-Based Human–Computer Interactions”.

Congruent with the overall aim of the journal, we hope to stimulate a useful and interdisciplinary interchange between individuals working primarily on fundamental and theoretical issues and those working on applied aspects.

We are particularly interested in papers that propose the application of visual-based HCI techniques, alone as well as in combination with other modalities (such as speech, mouse, and keyboard), within this broad field. Such papers can be explorative, empirical, meta-analytic, or technical. Papers that seek to bridge the gap between theoretical and applied aspects, as well as papers that study human action and perception behavior and its integration in natural and intuitive forms of HCI, will be especially welcome.

Some examples of possible topics for this Special Issue are provided below. These should be viewed as a suggestive list, rather than an exhaustive catalog. Individuals unsure about whether a proposed submission would be appropriate are invited to contact the Special Issue Guest Editor Kai Essig.

  1. Recognizing and analyzing emotions through facial expressions;
  2. Head and face location detection and tracking;
  3. Tracking and analysis of (large-scale) body movements;
  4. Hand tracking and gesture recognition for natural machine control;
  5. Recognition of postures, gaits, gestures, and general activities;
  6. Understanding users’ attention and intention from eye movements;
  7. Visual-based HCI techniques alone as well as in combination with other modalities (such as speech, mouse, and keyboard);
  8. Application areas of visual-based HCI, such as security and surveillance;
  9. Storage and processing of visual data repositories in databases;
  10. Visual formalisms and mechanisms supporting both the presentation of and interaction with data;
  11. Evaluation of visual-based HCI;
  12. Summary or outline papers on the current status and future developments;
  13. Privacy and data security issues.

Prof. Dr. Kai Essig
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual-based HCI
  • AI
  • camera stream processing
  • multi-modal interfaces
  • movement
  • intention
  • emotion and face recognition
  • activity
  • intention and gesture recognition
  • presentation of visual data
  • interactive data visualizations
  • multi-modal databases

Published Papers

This special issue is now open for submission.