Article

Controller-Free Hand Tracking for Grab-and-Place Tasks in Immersive Virtual Reality: Design Elements and Their Empirical Study

Alexander Masurovsky, Paul Chojecki, Detlef Runde, Mustafa Lafci, David Przewozny and Michael Gaebler
1 Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany
2 Faculty of Philosophy, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10117 Berlin, Germany
3 Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
* Authors to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(4), 91; https://0-doi-org.brum.beds.ac.uk/10.3390/mti4040091
Submission received: 19 October 2020 / Revised: 4 December 2020 / Accepted: 10 December 2020 / Published: 12 December 2020

Abstract

Hand tracking enables controller-free interaction with virtual environments, which can, compared to traditional handheld controllers, make virtual reality (VR) experiences more natural and immersive. As naturalness hinges on both technological and user-based features, fine-tuning the former while assessing the latter can be used to increase usability. For a grab-and-place use case in immersive VR, we compared a prototype of a camera-based hand tracking interface (Leap Motion) with customized design elements to the standard Leap Motion application programming interface (API) and a traditional controller solution (Oculus Touch). Usability was tested in 32 young healthy participants, whose performance was analyzed in terms of accuracy, speed and errors as well as subjective experience. We found higher performance and overall usability as well as overall preference for the handheld controller compared to both controller-free solutions. While most measures did not differ between the two controller-free solutions, the modifications made to the Leap API to form our prototype led to a significant decrease in accidental drops. Our results do not support the assumption of higher naturalness for hand tracking but suggest design elements to improve the robustness of controller-free object interaction in a grab-and-place scenario.

1. Introduction

Immersive Virtual Reality (VR) enables the user to “dive into” a computer-generated 3D environment. Scenarios in immersive VR are typically presented on a head-mounted display (HMD) that blocks out vision of the outside world and adapts the virtual environment based on the user’s head movements, thereby enabling a sense of presence (i.e., a feeling of actually being in the virtual world despite knowing it to be an illusion [1,2]). The popularity of immersive VR has been increasing for a wide range of applications; for example, for entertainment, education and industry but also research and clinical purposes [3]. Commercial VR systems (e.g., Oculus Rift, HTC Vive) enable interaction with virtual environments and objects (beyond head movements) typically by way of handheld controllers (e.g., the Oculus Touch). Such setups have been used, for example, for safety and equipment training in mining (e.g., [4]) or manufacturing scenarios (e.g., [5]). In basic research, VR enables naturalistic (i.e., dynamic, multisensory, interactive) experiments while maintaining full experimental control and allowing precise measurements of the participant’s behavior [6,7,8,9]. Effects found in such naturalistic experiments promise to generalize to real-world scenarios. The same features make VR a promising tool for clinical applications, where it could improve diagnostic precision and increase a therapy’s transfer effects—the degree to which it improves the patient’s life outside the clinic [10,11,12].

1.1. Controller-Free Object Interaction

Controllers, the standard means of interaction in VR gaming, can be cumbersome: they are extra equipment to carry and maintain, and they may require a learning period for those unaccustomed to this mode of interaction. Controller-free hand-tracking technology offers an alternative: it allows users to interact with virtual objects using their bare hands, reducing both the equipment needed and the time required to get acquainted with the interface. It can be assumed that interacting with objects in VR as one would in real life (i.e., without mediation by a controller) increases the intuitiveness and naturalness of the interaction, which could influence the device’s usability and performance as well as the general efficacy and outcome of VR-based applications. However, higher naturalness does not always entail higher performance or usability: the association between realism (i.e., the degree to which device interaction resembles features of real life) and the experience of naturalness may be U-shaped, with high but also low levels of realism generating a more natural experience than medium levels [13] (cf. the “uncanny valley” effect for virtual humans). Camera-based interaction is also central to other immersive technologies such as augmented reality (AR) and mixed reality (MR), which often do not provide controllers. In addition, controller-free interaction can increase the accessibility of VR technology for user groups that are unable to use controllers or uncomfortable with them (e.g., older adults, children or patients).

1.2. Controller-Free Interaction with the Leap Motion

The Leap Motion is a small, lightweight optical sensor that uses infrared signals to detect hand and finger motions for camera-based, controller-free (i.e., contact-/touch-less) human–computer interaction [14]. It can be attached to the front of a VR HMD to detect the user’s hands in immersive VR, and it provides a modifiable application programming interface (API) and software development kit (SDK). Its feasibility and usability have been tested in various contexts (e.g., [5,10,15,16,17]).
Other sensors that allow hand tracking, such as the Microsoft Kinect, take an “outside-in” approach [18], positioning the camera to face the user and thus requiring additional setup in the room. Data gloves (e.g., by Manus VR: https://www.manus-vr.com/) can be used for gesture recognition and may include haptic feedback, but, like controllers, they come with the disadvantage of additional equipment.

1.3. Design Challenges to Hand Tracking

Challenges of using a hand-tracking interface in VR (e.g., the Leap Motion) include detection errors, which can occur when the user’s hands are occluded from the sensor’s view; this typically happens more often with camera-based than with controller-based position tracking. In contrast to, for example, the Oculus Touch controller, the Leap Motion requires users to direct their head towards their hands to keep them within the camera’s field of view. The processing required for video-based hand tracking can also lead to larger temporal offsets between the virtual and the real hands than controller-based interaction [19]. In addition, without extra haptic (e.g., vibro-tactile) feedback, the sense of touch provides no information about virtual objects [2,17]. This absence of feedback, together with the type of gestures afforded by camera-based interaction, generates an experience that does not fully resemble hand-based object interaction in the real world and may require some time to get used to. While one cannot control a user’s prior experience with a technology, a well-designed interface can make an unfamiliar technological experience more intuitive [20].

1.4. Task and Prototype

The Leap Motion is a common, affordable, camera-based sensor for hand tracking. Leveraging its modifiability, we designed a prototype on top of the basic Leap API. We optimized the interface for the gestures of grabbing and placing objects on a table: gestures commonly used in neurological testing [10] and industry (e.g., [4]). Grab-and-place tasks are also common in VR interaction research [5,15,16]. The following design elements were introduced, based on our own experience and previous studies (a code sketch of the resulting grab logic follows the list):
  • Color: Smart object coloring (a green “spotlight” emitted from the virtual hand; see Figure 1) to indicate when an object is within grabbing distance. Color indicators on the virtual hand, the virtual object or both have been shown to improve time on task, accuracy of placement and subjective user experience [5,15,16].
  • Grab restriction: The user can only grab the object after first making an open-hand gesture within grabbing distance of the object, to prevent accidental grabs.
  • Transparency: The hand representation is semi-transparent as long as no object is grabbed, so that the user can see the object even when it is occluded by the hand (Figure 1).
  • Grabbing area: The grabbing area is extended so that near misses (following an open-hand gesture, see above) still grab the object.
  • Velocity restriction: Grabbing cannot take place while the hand is moving above a certain velocity, to prevent uncontrolled grabs and accidental drops.
  • Trajectory assurance: Once the object is released from the hand, stray finger movements cannot alter the trajectory of the falling object.
  • Acoustic support: Audio feedback occurs when an object is grabbed and when an object touches the table surface after release (pre-installed sounds available in the Unity library).
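To make the interplay of these elements concrete, the sketch below outlines the grab state machine they imply. It is a minimal Python illustration only: the actual prototype was implemented in Unity on top of the Leap API, and all names, thresholds and the exact state handling here are our assumptions rather than the study’s code.

```python
from dataclasses import dataclass

# Illustrative thresholds -- the study does not report the exact values.
GRAB_RADIUS = 0.12      # extended grabbing area around the object (m)
MAX_GRAB_SPEED = 0.8    # velocity restriction: no grabs above this speed (m/s)

@dataclass
class HandState:
    position: tuple      # palm position (x, y, z) in meters
    speed: float         # palm speed in m/s
    is_open: bool        # open-hand gesture detected
    is_gripping: bool    # grip gesture detected

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def update_grab(hand: HandState, cube_pos, armed: bool, holding: bool):
    """One tick of the grab state machine.
    `armed` becomes True once an open hand has been seen within grabbing
    range (the grab restriction); a grab only fires while armed, in range
    and below the speed limit (the velocity restriction)."""
    in_range = distance(hand.position, cube_pos) < GRAB_RADIUS
    if holding:
        if not hand.is_gripping:          # hand opened again: release
            return armed, False
        return armed, True
    if hand.is_open and in_range:
        armed = True                      # open hand near object arms the grab
    if armed and hand.is_gripping and in_range and hand.speed < MAX_GRAB_SPEED:
        return False, True                # grab succeeds; must re-arm next time
    return armed, False
```

The key decision encoded here is that a grab must first be “armed” by an open hand near the object and is vetoed at high hand speeds, which directly targets the accidental grabs and drops analyzed in Section 3.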

1.5. Present Study and Hypotheses

In this study, we assessed the usability of camera-based object interaction for a grab-and-place use case in immersive VR, which is common in VR-based rehabilitation and industry applications. We modified the basic implementation of Leap Motion-based hand tracking and created the HHI_Leap prototype (described above). We then compared it to the basic Leap Motion (B_Leap) and the Oculus Touch controller. Performance was evaluated using measures of accuracy, speed and errors (accidental drops) as well as subjective experience (self-report questionnaires).

Hypotheses

Based on the literature and our own experiences, we had the following hypotheses:
HHI_Leap vs. Oculus
  • Performance measures: The HHI_Leap shows lower accuracy (greater distance from target), longer times (total time, grab time and release time) and more errors (accidental drops) than the Oculus controller.
  • Subjective measures: The HHI_Leap is rated higher than the Oculus controller for naturalness and intuitiveness. For all other subjective measures (other individual ratings, SUS, overall preference), we did not have hypotheses (exploratory analyses).
HHI_Leap vs. B_Leap
  • Performance measures: The HHI_Leap shows higher accuracy (smaller distance from target), shorter times (total time, grab time and release time) and fewer errors (accidental drops) than the B_Leap.
  • Subjective measures: The HHI_Leap is rated higher than the B_Leap on all subjective measures (individual rating questions, SUS, overall preference).

2. Methods

2.1. Study Design

In a repeated-measures design with interface (B_Leap, HHI_Leap, Oculus controller) as the independent variable (Figure 2), all participants completed a grab-and-place task with all interfaces in randomized order (for task details, see Section 2.3). Performance was measured during the grab-and-place task, and participants answered questions about their experience after each interface (Figure 3).

2.2. Sample

Participants were recruited from staff at the Fraunhofer Heinrich Hertz Institute who were not involved in this research project. Exclusion criteria were injuries or vision impairments that would prevent them from completing the task. The sample comprised 32 participants (23 males, 8 females, 1 undisclosed) between 22 and 36 years of age (M = 27.8, SD = 3.34). The majority of participants (n = 20) reported “no” (n = 10) or “some” (n = 10) prior experience with VR. All participants (n = 32) reported at least “some” prior experience with video game controllers and half of the participants (n = 16) reported “frequent” current use of electronic games (including mobile phone games).

2.3. Task

Participants completed a virtual grab-and-place task (Figure 4), in which they had to place virtual cubes, one at a time, onto a target area on a virtual table (Figure 3). In 30 trials per interface (B_Leap, HHI_Leap, Oculus), 10 cubes of 3 different sizes (small: 3 cm; medium: 6 cm; large: 10 cm) were presented in random order. In each trial, the cube appeared randomly at one of 10 positions at 30 cm from the target (Figure 5). Participants used their dominant hand to place each cube as quickly and accurately as possible onto a target, which consisted of a 2D square the size and shape of one face of the cube (Figure 2). After grabbing the cube, the participant had one chance to place it on the target; they could not adjust the cube once it made contact with the table.
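To illustrate the trial structure (the study’s task itself was built in Unity), here is a Python sketch that generates one interface’s trial list under the stated constraints. The angular layout of the 10 spawn positions is an assumption; the paper specifies only their 30 cm distance from the target.

```python
import math
import random

CUBE_SIZES_CM = [3, 6, 10]   # small, medium, large
SPAWN_RADIUS = 0.30          # spawn positions 30 cm from the target (m)

# 10 candidate spawn positions on a circle around the target
# (assumption: the exact angular layout is not reported).
SPAWN_POSITIONS = [
    (SPAWN_RADIUS * math.cos(2 * math.pi * i / 10),
     SPAWN_RADIUS * math.sin(2 * math.pi * i / 10))
    for i in range(10)
]

def make_trial_list(seed=None):
    """30 trials per interface: 10 cubes of each of 3 sizes in random
    order, each spawning at a randomly chosen position."""
    rng = random.Random(seed)
    sizes = [s for s in CUBE_SIZES_CM for _ in range(10)]
    rng.shuffle(sizes)
    return [(size, rng.choice(SPAWN_POSITIONS)) for size in sizes]
```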

2.4. Measures

2.4.1. Performance Measures

  • Accuracy: Euclidean distance, in meters, from the 2D center of the bottom face of the cube to the center of the target square.
  • Total time per trial: Time from cube spawn (appearance on the table) until the time the cube made contact with the table after having been picked up; equal to the sum of the following two time measures (grab and release time).
  • Grab time (time to grab): Time from when the cube appeared on the table to the time it was grabbed.
  • Release time (time from grab to placement): Time from when the cube was grabbed to the time the cube made contact with the table after being released.
  • Accidental drops: Prematurely terminated trials due to mistakenly dropping the cube (for details, see Section 2.5); used both for cleaning the data and as an additional outcome measure to quantify interface performance. (A sketch of these per-trial computations follows this list.)
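The sketch below shows how these per-trial measures could be computed from logged data; it is a minimal Python illustration that assumes three logged timestamps and two logged positions per trial, not the study’s actual logging code.

```python
import math

def accuracy(cube_center_xy, target_center_xy):
    """Euclidean distance (m) between the 2D center of the cube's bottom
    face and the center of the target square."""
    dx = cube_center_xy[0] - target_center_xy[0]
    dy = cube_center_xy[1] - target_center_xy[1]
    return math.hypot(dx, dy)

def trial_times(t_spawn, t_grab, t_contact):
    """Timestamps in seconds: cube spawn, grab, and table contact after
    release. Total time is the sum of grab and release time."""
    grab_time = t_grab - t_spawn
    release_time = t_contact - t_grab
    total_time = grab_time + release_time   # equals t_contact - t_spawn
    return grab_time, release_time, total_time
```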

2.4.2. Subjective Experience Measures

  • System Usability Scale (SUS): A “quick and dirty” [21] questionnaire to assess the usability (primarily “ease of use”) of any product (e.g., websites, cell phones, kitchen appliances). It contains 10 items with 5-point Likert-scale response options from 1 (strongly disagree) to 5 (strongly agree). Responses are transformed by a scoring rubric into a score from 0 to 100 (a scoring sketch follows this list).
  • Single subjective questions: 8 questions assessing user experience (e.g., comfort, ease of gripping, likelihood to recommend to friends) with Likert-scale response options ranging from 1 (strongly disagree) to 5 (strongly agree).
  • Agency: The feeling of control over and connectedness to (a part of) one’s own body or a representation thereof [22] was measured with the question “I felt like I controlled the virtual representation of the hand as if it was part of my own body” [16]. Response options ranged from 1 (strongly disagree) to 7 (strongly agree).
  • Overall satisfaction on a Kunin-scale: Response options ranged from 1 (least satisfied) to 7 (most satisfied) with smiley faces representing degree of overall satisfaction [23] (Figure A1).
  • Overall preference: After the participants completed all 3 interfaces, they answered the question “Of the three interfaces you used, which did you like best?” It was left up to the participants to define “best” for themselves. This was not meant to be a single definitive data point to gauge overall subjective preference, but one measure among others (including overall satisfaction and the SUS).
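The SUS scoring rubric mentioned above is standard [21]: odd-numbered items contribute (response - 1) and even-numbered items (5 - response), and the sum is scaled by 2.5 to yield a 0–100 score. A minimal Python sketch of that rubric (not the study’s own scoring code):

```python
def sus_score(responses):
    """Standard SUS scoring [21] for 10 responses on a 1-5 scale."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i is 0-based: i=0 is item 1
                for i, r in enumerate(responses))
    return total * 2.5

# Example: all-neutral answers (3s) score 50 out of 100.
assert sus_score([3] * 10) == 50.0
```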

2.5. Data Cleaning/Pre-Processing

Accidental drops were analyzed separately from the calculation of the other performance metrics (accuracy, total time, grab time and release time). Therefore, trials that were flagged as accidental drops were removed from the set of trials analyzed for performance.
A trial was considered an accidental drop if it fulfilled at least one of three criteria:
  • the experimenter noted that the participant accidentally dropped the cube before getting a chance to place it on the target,
  • release time below 0.5 s and accuracy above 10 cm,
  • accuracy above 20 cm.
Trials with accidental drops (8%: 233 of 2880 trials in total) were removed from the analysis (199 according to criterion 1, of which 168 also fulfilled criterion 2 or 3; 34 fulfilled criterion 2 or 3 but not criterion 1). Hence, 2647 trials entered the performance analyses.
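The three criteria translate directly into a flagging predicate. A minimal Python sketch, with field and parameter names assumed for illustration:

```python
def is_accidental_drop(noted_by_experimenter, release_time_s, accuracy_m):
    """Flags a trial as an accidental drop if it meets any of the three
    criteria from Section 2.5 (times in seconds, distances in meters)."""
    return (noted_by_experimenter
            or (release_time_s < 0.5 and accuracy_m > 0.10)
            or accuracy_m > 0.20)

# Flagged trials are then excluded from the accuracy/time analyses, e.g.:
# kept = [t for t in trials
#         if not is_accidental_drop(t.noted, t.release_time, t.accuracy)]
```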

2.6. Analysis

For each metric, a one-way repeated-measures analysis of variance (ANOVA) with the three-level factor “interface” (B_Leap, HHI_Leap, Oculus) was conducted. For all statistical tests, a two-sided alpha of 0.05 was used to determine significance. In case of a significant ANOVA, paired t-tests were conducted between the HHI_Leap and the B_Leap and between the HHI_Leap and the Oculus; we did not compare the B_Leap and the Oculus, as the HHI_Leap was designed to improve upon the B_Leap and we had no research questions regarding that contrast. Multiple-comparison correction (for the two comparisons) was performed using the Holm–Bonferroni method [24]. For measures of central tendency, we used means and 95% confidence intervals. To enable quantitative (parametric) summary and inferential statistics, we deliberately added qualitative labels (i.e., “strongly agree”, “strongly disagree”) to only the extreme values of the scales. Despite some controversy, parametric tests can be a viable method for analyzing Likert-scale data, even when the data violate assumptions (e.g., normality) of those analyses [25,26,27].
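The Holm–Bonferroni step-down procedure [24] is compact enough to state as code. The published analysis used R, where p.adjust(p, method = "holm") performs this adjustment; the Python sketch below is an equivalent illustration, and the example p-values are invented.

```python
def holm_adjust(pvals):
    """Holm-Bonferroni adjustment [24]: sort the m p-values ascending,
    multiply the i-th smallest by (m - i), cap at 1, and enforce
    monotonicity so adjusted p-values never decrease."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# e.g., two post-hoc comparisons for one measure (invented values):
# holm_adjust([0.011, 0.40]) -> [0.022, 0.40]
```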
The analysis was conducted in R (v. 1.2.5) using the following packages: readr v. 1.3.1, readxl v. 1.3.1, tidyverse v. 1.3.0, dplyr v. 0.8.3, ggplot2 v. 3.2.1, see v. 0.3.0, cowplot v. 1.0.0, lsr v. 0.5, ggpubr v. 0.2.4, car v. 3.0–84 and rstatix v. 0.3.0. The R code can be found in the Supplementary Materials.
In the plots, asterisks indicate (Holm-adjusted) p-values: * = p < 0.05, ** = p < 0.01, *** = p < 0.001.

3. Results

We conducted repeated-measures ANOVAs to analyze differences between the interfaces on each measure. In case of statistical significance, pairwise comparisons were used to determine the origin of the effect (see Section 2.6: Analysis for details). The results for all performance measures are presented first, followed by subjective measures.

3.1. Performance Measures

There were significant main effects of interface for all performance measures (Table 1), which were driven by the Oculus controller performing significantly better than the HHI_Leap on all performance metrics (Table 2). No significant differences were found between the HHI_Leap and B_Leap for accuracy or total time per trial. The grab time with the HHI_Leap was significantly longer than with the B_Leap by 0.22 s on average (Figure 6). Release time with the HHI_Leap was significantly shorter than with the B_Leap by 0.27 s on average (Figure 7). With the HHI_Leap, significantly fewer accidental drop errors occurred than with the B_Leap, with a difference of 2.34 drops on average (Figure 8).

3.2. Subjective Measures

3.2.1. System Usability Scale (SUS)

There was a significant main effect of interface on SUS scores (Table 1). This effect was driven by the Oculus controller: SUS scores for the HHI_Leap were significantly lower than for the Oculus controller, while the paired-samples t-test comparing the HHI_Leap with the B_Leap showed no significant difference (Table 2, Figure 9). Scoring guidelines for the SUS [28] put both Leap interfaces at the border (score = 70) between “acceptable” and “marginal.”

3.2.2. 5-Point Likert Scale Questionnaire Items

ANOVAs found significant main effects of interface on ratings of comfort, precision, gripping, releasing and likelihood to recommend (Table 1). These effects were driven by differences between the HHI_Leap and the Oculus, as post-hoc paired-samples t-tests showed no significant differences between the HHI_Leap and the B_Leap (Table 2, Figure 10). All 5-point Likert question ratings were numerically lower for the HHI_Leap than for the Oculus controller, and the differences were significant for comfort, gripping, releasing, precision and likelihood to recommend (Table 2, Figure 10).

3.2.3. Agency

ANOVA found no significant effect of interface on ratings of Agency (Table 1, Figure 11).

3.2.4. Overall Satisfaction

ANOVA found a significant main effect of interface on ratings of Overall Satisfaction (Table 1). Post-hoc paired-samples t-tests showed a significant difference between the Oculus and HHI_Leap (Table 2, Figure 12).

3.2.5. Overall Preference

Out of the 32 participants, 18 selected the Oculus controller as their overall preferred interface, 7 selected the HHI_Leap and 7 selected the B_Leap.

4. Discussion

4.1. Comparison of Hand Tracking to the Traditional Controller

For a grab-and-place task in immersive VR, we systematically compared a prototype of a camera-based hand tracking interface optimized for this use case to the standard (non-optimized) Leap Motion API and to a traditional controller. We measured performance (accuracy, speed, errors) and subjective experience (e.g., comfort, naturalness, precision). The traditional controller outperformed hand tracking on all performance metrics. Subjective ratings were significantly higher for the traditional controller for ease of use (as measured by the SUS), gripping, releasing, precision, comfort, likelihood to recommend, and overall satisfaction. For the remaining subjective dimensions, there were no significant differences between the traditional controller and hand tracking.
Because hand tracking enables more naturalistic interaction gestures than the traditional controller, we hypothesized that participants would rate hand tracking higher on naturalness and intuitiveness. The absence of significant differences on these ratings was surprising. One explanation is unfulfilled user expectations about the capabilities of hand tracking (see [29]; [30] describes an analogous case for voice interfaces). Technological limitations in mapping the virtual to the real hand and the responsiveness of the Leap Motion system may have lowered feelings of naturalness [17] as well as the related dimension of agency: even small differences between the user’s intended hand gesture and how the virtual hand actually behaves can lower the sense of agency [22,31,32].
The combination of naturalistic gestures, sub-optimal responsiveness and the lack of haptic feedback may have placed hand tracking in its current state (as represented by our prototype) in the dip of the U-shaped curve described by McMahan et al. [13]. This close-but-not-quite-there representation of reality in VR may create an uncanny valley effect (originally discussed in the context of lifelike robots [33,34]), in which the similarity to reality results in the VR experience feeling unsettling. Consistent with this possibility, Argelaguet et al. [16] found that less realistic virtual hand avatars created higher feelings of agency.
The haptic element is important: One study found that combining the Leap Motion with a haptic feedback device can create stronger feelings of presence and immersion than the Leap Motion on its own [1]. Though the Oculus controller did not map haptic feedback to interaction with virtual objects in our study, it is itself a physical object, with buttons to press to engage in interaction. Participants may have responded positively to this tactile sensation when using the traditional controller to interact with virtual objects.

4.2. Comparison of Our Hand Tracking Prototype (HHI_Leap) to the Basic Leap API (B_Leap)

Performance metrics showed that the HHI_Leap improved upon the B_Leap by reducing errors (accidental drops), while being slightly slower for grabbing (grab time), slightly faster for placing (release time), and not significantly different in accuracy or total time per trial. The increased grab time can be explained by the HHI_Leap’s modification that required users to hover with the hand over the object before grabbing was possible (see Section 1.4 for a list of our prototype’s design elements). Although this measure against unintentional gripping increased grab time, the total time for the task remained the same, as the release time was correspondingly shorter. The additional features of the HHI_Leap (compared to the B_Leap) against unintentional gripping and unintentional dropping during fast movements may explain the reduced errors; specifically, grab recognition was prevented when the hand was moving too fast, and stray fingers were prevented from altering the trajectory of the cube after release. While subjective scores were slightly higher overall for the HHI_Leap on ease of use (as measured by the SUS, Figure 9) and individual questions (Figure 10), we found no significant differences between the HHI_Leap and the B_Leap on any subjective experience measure.

4.3. Limitations and Recommendations for Future Research

As our sample was relatively homogeneous (mostly young males, see Section 2.2), the results may not generalize, for example, to populations that might use VR in clinical settings, such as older adults or patients. While previous VR experience in our sample was low, video game experience was high, which might have increased familiarity with video game controllers and prepared our participants for the Oculus controller. Familiarity generally increases affinity [35] and has been shown to increase user satisfaction and performance for interfaces [30].
The sample size was based on comparable within-subjects studies, which tested around 30 participants (e.g., [5,15,16]). Testing a larger sample could determine whether the non-significant differences between conditions reflect a lack of statistical power or a true absence of such differences.
In addition, users may have had expectations for hand tracking based on cultural phenomena such as science-fiction films (e.g., Minority Report (2002)) [29], which may have been the only source of their mental models [20,36] for such an interface. Participants expecting a highly fluid experience like in the movies would be disappointed by the performance of the Leap Motion, which may have lowered their ratings. We recommend that future research assess expectations and prior experience as factors in observed usability.

5. Conclusions

Our study forms a basis for determining the state of the art of a flexible, common implementation of a hand tracking VR interface by comparing it to a traditional VR controller on a task representative of the types of motions found in VR applications. The traditional controller provided better overall usability than the hand tracking interface, as evidenced by performance and subjective metrics. However, the significant differences on individual questions were limited to those related to performance, and the hand tracking interface was rated as acceptable (within 1.1 standard deviations from the middle rating; see Table 2, Figure 10) on all other subjective items. For tasks that involve grabbing and placing, hand tracking can be a viable alternative, especially given the flexibility of the Leap API. Our modifications improved the performance of the Leap Motion, particularly by reducing accidental drops.
Our results do not support the hypothesis of higher naturalness for hand tracking in its current state over traditional controllers. However, as familiarity with hand tracking in the general population increases and technical issues are progressively overcome, this may change. An analogous technology may be touch interfaces, which were inferior to mouse pointing and clicking for many years, until very recently. Similarly, as hand tracking interfaces iteratively improve and become progressively more commonplace in daily life, they may come to surpass traditional controllers in user experience.

Supplementary Materials

Author Contributions

Conceptualization, P.C., D.R. and M.G.; Data curation, A.M. and M.L.; Formal analysis, A.M. and M.G.; Funding acquisition, P.C. and D.P.; Methodology, A.M., D.R. and M.G.; Project administration, P.C. and D.P.; Resources, D.P.; Software, M.L.; Supervision, P.C., D.R. and M.G.; Visualization, A.M.; Writing—original draft, A.M.; Writing—review & editing, A.M. and D.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fraunhofer Institute for Telecommunications.

Acknowledgments

We acknowledge the support of the study volunteers from the Fraunhofer Institute and beyond, without whom we would have no data. Thank you as well to all those external to the project who gave feedback on sections of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR    Augmented reality
HMD   Head-mounted display
MDPI  Multidisciplinary Digital Publishing Institute
MR    Mixed reality
VR    Virtual reality

Appendix A. Questionnaire Items

Appendix A.1. English

Appendix A.1.1. 5-Point Likert Questions

Rate the following questions: 1 (strongly disagree) 2 3 4 5 (strongly agree):
  • Using this interface was comfortable
  • This interface was precise
  • This interface was intuitive
  • This interface was tiring for the hand (reverse scored)
  • The gripping of objects gave me a lot of trouble (reverse scored)
  • The releasing of objects gave me a lot of trouble (reverse scored)
  • The gripping and releasing of objects was very natural
  • I would recommend this interface to friends

Appendix A.1.2. 7-Point Likert Scale Questions

Rate the following questions: 1 (strongly disagree) 2 3 4 5 6 7 (strongly agree)
  • Agency: I felt like I controlled the virtual representation of the hand as if it was part of my own body.
  • Overall satisfaction: Which of the following faces best represents your overall satisfaction with using these interfaces? (Figure A1).
Figure A1. Face images accompanying the Kunin scale for the question assessing overall satisfaction.

Appendix A.2. German

Appendix A.2.1. 5-Point Likert Questions

Bewerten Sie die Nutzung der eben getesteten Hand-Interaktionstechnologie 1 (stimme gar nicht zu) bis 5 (stimme sehr zu).
  • Die Benutzung dieser Interaktionstechnologie war komfortabel
  • Diese Interaktionstechnologie war präzise
  • Die Interaktionstechnologie war intuitiv
  • Diese Interaktionstechnologie war für die Hand ermüdend
  • Das Greifen der Objekte bereitete mir große Mühe
  • Das Loslassen der Objekte bereitete mir große Mühe
  • Das Greifen und Loslassen der Objekte war sehr natürlich
  • Ich würde diese Interaktionstechnologie Freunden empfehlen

Appendix A.2.2. 7-Point Likert Scale Questions

Bitte wählen Sie eine der folgenden Antworten: 1 (stimme gar nicht zu) 2 3 4 5 6 7 (stimme sehr zu)
  • Agency: Ich hatte das Gefühl, die virtuelle Darstellung der Hand so zu steuern, als ob sie Teil meines eigenen Körpers wäre.
  • Overall satisfaction: Welches Gesicht entspricht am ehesten Ihrer Gesamtzufriedenheit mit der Nutzung dieser Interaktionstechnologie? (Figure A1).

References

  1. Kim, M.; Jeon, C.; Kim, J. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality. Sensors 2017, 17, 1141.
  2. Slater, M. Grand Challenges in Virtual Environments. Front. Robot. AI 2014, 1, 1–4.
  3. Rizzo, A.; Koenig, S. Is clinical virtual reality ready for primetime? Neuropsychology 2017, 31, 877–899.
  4. Kim, H.; Choi, Y. Performance comparison of user interface devices for controlling mining software in virtual reality environments. Appl. Sci. 2019, 9, 2584.
  5. Geiger, A.; Bewersdorf, I.; Brandenburg, E.; Stark, R. Visual feedback for grasping in virtual reality environments for an interface to instruct digital human models. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2018; Volume 607, pp. 228–239.
  6. Park, J.L.; Dudchenko, P.A.; Donaldson, D.I. Navigation in real-world environments: New opportunities afforded by advances in mobile brain imaging. Front. Hum. Neurosci. 2018, 12, 1–12.
  7. Hofmann, S.; Klotzsche, F.; Mariola, A.; Nikulin, V.; Villringer, A.; Gaebler, M. Decoding Subjective Emotional Arousal during a Naturalistic VR Experience from EEG Using LSTMs. In Proceedings of the 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Taichung, Taiwan, 10–12 December 2018; pp. 128–131.
  8. Tromp, J.; Peeters, D.; Meyer, A.S.; Hagoort, P. The combined use of virtual reality and EEG to study language processing in naturalistic environments. Behav. Res. Methods 2018, 50, 862–869.
  9. Parsons, T.D. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences. Front. Hum. Neurosci. 2015, 9, 1–9.
  10. Belger, J.; Krohn, S.; Finke, C.; Tromp, J.; Klotzche, F.; Villringer, A.; Gaebler, M.; Chojecki, P.; Quinque, E.; Thöne-Otto, A. Immersive Virtual Reality for the Assessment and Training of Spatial Memory: Feasibility in Neurological Patients. In Proceedings of the 2019 International Conference on Virtual Rehabilitation (ICVR), Tel Aviv, Israel, 21–24 July 2019; pp. 21–24.
  11. Massetti, T.; da Silva, T.D.; Crocetta, T.B.; Guarnieri, R.; de Freitas, B.L.; Bianchi Lopes, P. The Clinical Utility of Virtual Reality in Neurorehabilitation: A Systematic Review. J. Cent. Nerv. Syst. Dis. 2018, 10.
  12. Pedroli, E.; Greci, L.; Colombo, D.; Serino, S.; Cipresso, P.; Arlati, S.; Mondellini, M.; Boilini, L.; Giussani, V.; Goulene, K.; et al. Characteristics, Usability, and Users Experience of a System Combining Cognitive and Physical Therapy in a Virtual Environment: Positive Bike. Sensors 2018, 18, 2343.
  13. McMahan, R.P.; Lai, C.; Pal, S.K. Interaction Fidelity: The Uncanny Valley of Virtual Reality Interactions. In Virtual, Augmented and Mixed Reality; Lackey, S., Shumaker, R., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 59–70.
  14. Weichert, F.; Bachmann, D.; Rudak, B.; Fisseler, D. Analysis of the accuracy and robustness of the Leap Motion controller. Sensors 2013, 13, 6380–6393.
  15. Vosinakis, S.; Koutsabasis, P. Evaluation of visual feedback techniques for virtual grasping with bare hands using Leap Motion and Oculus Rift. Virtual Real. 2018, 22, 47–62.
  16. Argelaguet, F.; Hoyet, L.; Trico, M.; Lécuyer, A. The role of interaction in virtual embodiment: Effects of the virtual hand representation. In Proceedings of the 2016 IEEE Virtual Reality (VR), Greenville, SC, USA, 19–23 March 2016.
  17. Wozniak, P.; Vauderwange, O.; Mandal, A.; Javahiraly, N.; Curticapean, D. Possible applications of the LEAP motion controller for more interactive simulated experiments in augmented or virtual reality. Opt. Educ. Outreach IV 2016, 9946.
  18. Zhang, Z. Microsoft Kinect sensor and its effect. IEEE Multimed. 2012, 19, 4–10.
  19. Benda, B.; Esmaeili, S.; Ragan, E.D. Determining Detection Thresholds for Fixed Positional Offsets for Virtual Hand Remapping in Virtual Reality. Available online: https://www.cise.ufl.edu/~eragan/papers/Benda_ISMAR2020.pdf (accessed on 29 November 2020).
  20. Norman, D. The Design of Everyday Things; Basic Books: New York, NY, USA, 2013.
  21. Brooke, J. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor and Francis: London, UK, 1996; pp. 189–194.
  22. Caspar, E.A.; Cleeremans, A.; Haggard, P. The relationship between human agency and embodiment. Conscious. Cogn. 2015, 33, 226–236.
  23. Kunin, T. The construction of a new type of attitude measure. Pers. Psychol. 1955, 8, 65–77.
  24. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70.
  25. Norman, G. Likert scales, levels of measurement and the “laws” of statistics. Adv. Health Sci. Educ. 2010, 15, 625–632.
  26. Carifio, J.; Perla, R. Resolving the 50-year debate around using and misusing Likert scales. Med. Educ. 2008, 42, 1150–1152.
  27. Meek, G.; Ozgur, C.; Dunning, K. Comparison of the t vs. Wilcoxon signed-rank test for Likert scale data and small samples. J. Appl. Stat. Methods 2007, 6, 91–106.
  28. Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123.
  29. Spurlock, J.; Ravasz, J. Hand Tracking: Designing a New Input Modality. Presented at the Oculus Connect Conference, San Jose, CA, USA. Available online: https://www.youtube.com/watch?v=or5M01Pcy5U (accessed on 13 January 2020).
  30. Myers, C.M.; Furqan, A.; Zhu, J. The impact of user characteristics and preferences on performance with an unfamiliar voice user interface. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Paper 47), Glasgow, UK, 4–9 May 2019; pp. 1–9.
  31. Krugwasser, R.; Harel, E.V.; Salomon, R. The boundaries of the self: The sense of agency across different sensorimotor aspects. J. Vis. 2019, 19, 1–11.
  32. Haggard, P.; Tsakiris, M. The experience of agency: Feelings, judgments, and responsibility. Curr. Dir. Psychol. Sci. 2009, 18, 242–246.
  33. Mathur, M.B.; Reichling, D.B. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition 2016, 146, 22–32.
  34. Mori, M. The uncanny valley. IEEE Robot. Autom. Mag. 2012, 19, 98–100.
  35. Zajonc, R. Mere exposure: A gateway to the subliminal. Curr. Dir. Psychol. Sci. 2001, 10, 224–228.
  36. Corry, M.D. Mental models and hypermedia user interface design. AACE Educational Technology Review 1998, 20–24.
Figure 1. Differences in hand representations between Basic Leap (B_Leap; A–C) and HHI Modified Leap (HHI_Leap; D–F). A/D: Open hand, not within grabbing range. B/E: Open hand, within grabbing range. C/F: Gripping hand.
Figure 2. Virtual representations of the hands for (A) basic Leap Motion (B_Leap), (B) modified Leap Motion (HHI_Leap), and (C) Oculus Touch handheld controller (Oculus); (D) mounted Leap Motion sensor; (E) Oculus Touch handheld controller.
Figure 3. Study design: after a training/practice round (left column), the task (middle column) and questionnaires (right column) were completed. Interfaces (order randomized across participants): B_Leap = basic Leap Motion, HHI_Leap = modified Leap Motion, Oculus = Oculus Touch handheld controller.
Figure 4. Participant engaging in the task (front view of one of the Leap Motion trials).
Figure 5. Diagram of target and cube spawn positions: cubes spawned (appeared) at one of 10 positions located 30 cm from the target.
Figure 6. Grab time by interface, with means and 95% confidence intervals, individual participant times (colored dots), smoothed density distributions and results of paired-samples t-tests. * = p < 0.05, *** = p < 0.001.
Figure 7. Release time by interface, with means and 95% confidence intervals, individual participant times (colored dots), smoothed density distributions and results of paired-samples t-tests. * = p < 0.05, *** = p < 0.001.
Figure 8. Accidental drops by interface, with means and 95% confidence intervals, individual participant counts (colored dots), smoothed density distributions and results of paired-samples t-tests. *** = p < 0.001.
Figure 9. Usability scores (System Usability Scale, SUS) by interface, with means and 95% confidence intervals, individual participant scores (colored dots), smoothed density distributions, results of paired-samples t-tests and scoring categories [28]. * = p < 0.05.
Figure 10. Mean scores for subjective questions, converted to a scale of −2 to 2, with 95% CI. * = p < 0.05, *** = p < 0.001. All significant differences are between Oculus and HHI_Leap.
Figure 11. Mean scores for Agency, converted to a scale of −3 to 3, with 95% CI.
Figure 12. Mean scores for Overall Satisfaction, converted to a scale of −3 to 3, with 95% CI. * = p < 0.05. Significant difference is between Oculus and HHI_Leap (see Table 2).
Table 1. Results of one-way repeated-measures ANOVAs comparing performance and subjective measures between interfaces (B_Leap, HHI_Leap, Oculus). For post-hoc comparisons, see Table 2.

Performance Measures
Measure             df_n   df_d   F        p        η²_p
Accuracy            2      62     41.711   <0.001   0.574
Total Time          2      62     36.619   <0.001   0.542
Grab Time           2      62     29.057   <0.001   0.484
Release Time        2      62     30.342   <0.001   0.495
Accidental Drops    2      62     44.383   <0.001   0.589

Subjective Measures
Measure             df_n   df_d   F        p        η²_p
SUS                 2      62     7.132    0.002    0.187
Comfortable         2      62     7.911    <0.001   0.203
Precise             2      62     27.921   <0.001   0.474
Intuitive           2      62     0.198    0.821    0.006
Tiring              2      62     0.012    0.989    0.0004
Gripping            2      62     23.420   <0.001   0.430
Releasing           2      62     25.571   <0.001   0.452
Natural             2      62     1.608    0.209    0.049
Recommend           2      62     6.556    0.003    0.175
Agency              2      62     1.602    0.21     0.049
Satisfaction        2      62     5.901    0.005    0.160
Table 2. Results of post-hoc paired t-tests, comparing performance and subjective measures for HHI_Leap and Oculus (top) as well as HHI_Leap and B_Leap (bottom).

HHI_Leap vs. Oculus: Performance Measures
                      HHI_Leap           Oculus
Measure               M        SD        M        SD        t         df   p (Holm)
Accuracy (m)          0.0158   0.0081    0.0071   0.0038    7.1268    31   <0.001
Total Time (s)        3.5145   1.3465    2.2705   0.753     7.8842    31   <0.001
Grab Time (s)         1.573    0.6522    0.9462   0.2479    6.9393    31   <0.001
Release Time (s)      1.9416   0.8561    1.3243   0.5745    6.7013    31   <0.001
Accidental Drops (#)  2.4062   2.0924    0.125    0.336     6.2427    31   <0.001

HHI_Leap vs. Oculus: Subjective Measures
                      HHI_Leap           Oculus
Measure               M        SD        M        SD        t         df   p (Holm)
SUS                   70.4688  18.8418   82.3438  14.7416   −2.6887   31   0.022
Comfortable           3.375    1.1       4.156    0.92      −3.0886   31   0.008
Precise               2.594    0.875     4.156    0.92      −6.4689   31   <0.001
Gripping              3.094    1.174     4.406    1.073     −5.2133   31   <0.001
Releasing             2.781    1.099     4.406    1.073     −6.1393   31   <0.001
Recommend             3.375    1.238     4.094    0.928     −2.6592   31   0.025
Satisfaction          5        1.047     5.594    0.875     −2.4615   31   0.039

HHI_Leap vs. B_Leap: Performance Measures
                      HHI_Leap           B_Leap
Measure               M        SD        M        SD        t         df   p (Holm)
Accuracy (m)          0.0158   0.0081    0.0154   0.0052    0.2934    31   0.771
Total Time (s)        3.5145   1.3465    3.5661   1.5687    −0.2926   31   0.772
Grab Time (s)         1.573    0.6522    1.3543   0.4097    2.2529    31   0.032
Release Time (s)      1.9416   0.8561    2.2118   1.2365    −2.3171   31   0.027
Accidental Drops (#)  2.4062   2.0924    4.75     2.6761    −3.8627   31   <0.001

HHI_Leap vs. B_Leap: Subjective Measures
                      HHI_Leap           B_Leap
Measure               M        SD        M        SD        t         df   p (Holm)
SUS                   70.4688  18.8418   69.9219  18.8418   −0.1871   31   0.853
Comfortable           3.375    1.1       3.438    0.982     −0.3117   31   0.757
Precise               2.594    0.875     2.438    1.076     0.5958    31   0.556
Gripping              3.094    1.174     2.875    1.129     0.9088    31   0.37
Releasing             2.781    1.099     2.594    1.214     0.641     31   0.526
Recommend             3.375    1.238     3.344    1.096     0.1664    31   0.869
Satisfaction          5        1.047     4.844    1.081     0.776     31   0.444
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
