Article

Visual and Haptic Guidance for Enhancing Target Search Performance in Dual-Task Settings

1 College of Design, National Taipei University of Technology, Taipei 10608, Taiwan
2 School of Design Arts, Xiamen University of Technology, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Submission received: 19 April 2024 / Revised: 21 May 2024 / Accepted: 23 May 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Human–Computer Interaction and Virtual Environments)

Abstract

In complex environments, users frequently need to manage multiple tasks simultaneously, which poses significant challenges for user interface design. For instance, when driving, users must maintain continuous visual attention on the road ahead while also monitoring rearview mirrors and performing shoulder checks. These multitasking scenarios present substantial design challenges in effectively guiding users. To address these challenges, we focus on investigating and designing visual and haptic guidance systems to augment users’ performance. We initially propose the use of visual guidance, specifically employing a dynamic arrow as a guidance technique. Our evaluation shows that dynamic arrows significantly expedite both reaction and selection times. We further introduce and evaluate haptic feedback, which users perceive as more salient than visual guidance, leading to quicker responses when switching from primary to secondary tasks. This allows users to maintain visual attention on the primary task while simultaneously responding effectively to haptic cues. Our findings suggest that multimodal guidance, especially haptic guidance, can enhance both reaction time and user experience in dual-task environments, offering promising practical implications and guidelines for designing more user-friendly interfaces and systems.

1. Introduction

In complex everyday environments, users often find themselves managing multiple tasks simultaneously. For example, in traffic scenarios, pedestrians, cyclists, and drivers navigating a bustling cityscape must stay aware of their immediate surroundings while also being vigilant of potential hazards from behind. This multitasking demand extends to industrial contexts, where workers operating heavy machinery need to manage controls while maintaining situational awareness to avoid accidents. The complexity of such scenarios intensifies in critical professional settings such as firefighting, where individuals must stay alert to their surrounding environment while advancing toward a fire.
In these environments, guiding users’ attention to improve multitasking performance is crucial for safety. A considerable body of research exists that provides diverse strategies for improving user performance. For instance, the role of attention cues and information awareness on visual displays, such as Head-Mounted Displays (HMDs) or mobile screens, has been extensively explored across various fields, including the medical [1], industrial [2], and tourism sectors [3]. Numerous visual guidance designs have been investigated to help users, such as arrows [2,4,5], halos [6], attention funnels [7], and wedges [8].
In addition to visual guidance, other modalities, such as haptic feedback, have been extensively applied in various user scenarios, including digital gaming [9], stress relief [10], and traffic navigation [11,12]. Haptic feedback can deliver realistic and immersive user experiences with wearable devices [13,14,15,16] and provide guidance or warning signals through wearable devices like belts and vests [17], enhancing traffic awareness and directional sense.
In scenarios like traffic or industry, where users need to stay aware of their environment and actively search for targets, users’ perception and cognitive load are intensely demanded [18]. The Multiple Resource Theory suggests that dual tasks drawing upon the same sensory modality could potentially interfere with each other more than tasks requiring different resources [19]. Thus, employing multiple modalities (e.g., visual and haptic feedback) can be advantageous in complex dual-task settings. Experimental evidence supports that multimodal interaction can improve users’ perception of surroundings and operation accuracy [20].
In dual-task settings, haptic feedback has proven to be a valuable interaction modality, shown to enhance user performance. For instance, haptic displays can increase situational awareness without affecting the primary task in traffic environments [21]. Haptics may also help maintain visual attention on the road while allowing for reliable dashboard control in driving scenarios [22]. Experiments have also shown that vibrotactile displays can effectively guide users to targets without impacting their performance on other concurrent tasks [23].
Augmented visual signs and haptic input modalities are well-known interface designs; however, their effectiveness in enhancing target search performance in dual-task settings still needs further investigation. Given the complexities and challenges of managing multiple tasks in dynamic user scenarios, the research objective of this paper was to investigate the potential of visual and haptic guidance in enhancing the target search performance within dual-task settings. We address the challenge of efficiently guiding users’ attention in environments where multiple tasks must be managed. Through two user studies, we explored the efficacy of visual and haptic feedback to understand their impact on user responsiveness and preference. In both studies, participants engaged in a primary task of searching for and selecting a target within a 5 × 5 grid in front of them, and they were guided to a secondary task of searching for and selecting a target within a 2 × 5 grid behind them. When a target appeared in the secondary task behind the users, a visual arrow or haptic vibration feedback would simultaneously appear, pointing in the direction of the target to prompt the user to search for it behind them.
Our research methodology used user studies to evaluate the effectiveness of various design factors of visual and haptic feedback. The first study evaluated the effectiveness of dynamic visual arrows compared to static arrows, considering factors such as guidance continuity and target location. The second study introduced haptic feedback as an additional modality and compared its effectiveness to visual guidance. Our findings demonstrate the following:
  • Dynamic visual arrows, as opposed to static ones, significantly improve both reaction time and selection time in target search tasks.
  • Haptic feedback is introduced as a more salient guidance technique compared to visual cues. It is shown to facilitate quicker response times, enabling users to maintain visual attention on primary tasks while managing secondary tasks through haptic cues.
This research enriches the existing body of knowledge on visual–haptic interactions by offering empirical evidence that underscores the efficacy of these modalities in improving target search performance. Additionally, it provides insights for the design of user-friendly interfaces and systems, which are crucial for supporting multitasking in complex environments.

2. Related Works

A substantial corpus of research has been dedicated to investigating visual–haptic interactions, with a specific focus on enhancing target search performance across various devices and user scenarios. These efforts have laid a robust foundation for the development of visual and haptic guidance systems intended for target searching within dual-task settings. This section briefly reviews this body of related work.

2.1. Visual Displays for Attention Cue and Information Awareness

Visual guidance can provide consistent assistance to users based on their current location and direction across various digital devices, such as HMDs or mobile screens. This form of user attention guidance can be applied in a wide range of scenarios. For instance, in medical surgery and training, AR visual cues could direct a doctor’s attention [1,24]. Visual guidance techniques are also employed in manual assembly assistance systems for target cueing [2]. In tourism navigation information systems, location searching on a digital map is a common task, and visual guidance could assist users in locating nearby points of interest [3].
Searching for and locating an out-of-view object is a common task for augmented reality (AR) applications on mobile devices and HMDs. Several visual guidance techniques, including halos [6], attention funnels [7], and wedges [8], have been employed to aid users. Among these techniques, arrow guidance is a straightforward and effective method for guiding user attention [2,4,5]. Arrow guidance typically indicates the direction from the user’s line of sight toward the position of the out-of-view object. As arrows are familiar to users, they offer high usability and low workload [4]. Arrow cues also exhibit the greatest overall benefit in target searching tasks [5]. In manual assembly assistance systems, arrow guidance has demonstrated the fastest guiding performance in target search tasks [2], whereas other dynamic visual effects, such as background flickering, slow user performance and induce more eye fixations on the arrow rather than on the target [2].
Arrow guidance has been utilized across various computing platforms. For example, 3D arrows have been proven as effective visual cues in mobile AR applications in terms of the target searching performance, workload, and user experience [25]. Arrows are also effective in mobile tourism applications, enabling users to estimate the position of objects more precisely [3]. Furthermore, 3D arrows have proven to be efficient and accurate in densely populated mobile 3D environments, compared to other techniques such as 3DWedge+ and Halo3D [26]. On HMDs, arrow guidance leads to a faster localization, higher accuracy, lower workload, and slight decrease in physiological stress [27]. It also facilitates the faster recognition of simultaneous events compared to exocentric viewpoint guidance on visual attention [27]. In virtual reality (VR) training environments, arrow guidance aids the guidance and restoration performance [28].
However, researchers and designers have noted that the presence of cuing to aid target detection for expected targets could divert attention away from unexpected targets in tasks such as piloting an unmanned air vehicles [29], as well as target searching tasks such as in endoscopic navigation exercises [30]. Since shifting attention toward compelling guidance information could have detrimental effects, designers should use strategies to mitigate attention costs. Furthermore, research suggests that visual information placement should avoid the visual center to minimize interference with the visual scene, thereby achieving an optimal visual search [31].

2.2. Haptic Displays for Attention Guidance and Information Awareness

Haptic interaction has a broad range of applications across many user scenarios. For instance, the integration of mobile haptic feedback can enhance the realism and immersion of user experiences with wearable devices such as gloves [13], armbands [14], or wristbands [15,16] in virtual reality or augmented reality applications. The parameters of dynamic tactile feedback, such as speed, position, direction, length, thickness, and intensity, can be controlled [32,33] and implemented in areas such as tactile chairs for digital gaming [9] or interactive storytelling [34,35]. Beyond the hands and body, foot-tickling actuators have been designed to induce laughter for stress relief [10]. To accurately understand vibrotactile perception across different parts of the human body, a spatial map of functional ranges of inter-motor distances has been proposed for constructing body-worn vibrotactile displays [36].
Another important application of haptic feedback is to provide guidance or warning signals via wearable devices such as belts and vests [17]. For example, haptic vests and belts are used in automobile and motorcycle navigation systems [11,12] to improve the awareness of traffic and the sense of direction. In addition to the belt and vest, target location can be informed through haptic devices on various parts of the body, such as headbands [37,38] and hats [39,40]. Similarly, wearable vibrotactile feedback provided through the legs of clinicians can support alarm-state vital sign identification that is competitive with graphical and auditory alarm display conditions [41].
In addition to the commonly applied tactile feedback through wearable belts or vests, haptic feedback could also be provided through controlled electrostatic stimulation [42,43] or mid-air ultrasonic transducers [44,45], as well as airflow [46,47] and liquid flow [48].

2.3. Multimodal and Visual–Haptic Displays Design

Navigating through a target search task within a visually complex and cluttered scene can be challenging due to the dense information space. Strategically deploying multiple modalities could enhance user awareness and task performance. Existing research indicates that integrating multiple sensory channels, such as visual and tactile feedback, can improve users’ perception of their surroundings and enhance operational accuracy [20]. When detecting moving objects, multimodal feedback can increase target awareness, with tactile feedback proving particularly useful [49].
Multimodal interactions have also been extensively studied in tasks such as visual search and positioning, particularly involving audio and visual modalities [50,51,52,53]. These interactions not only enhance the precision of positioning tasks but also ensure that the user’s visual field remains unaffected during high-precision operations such as surgery [54,55].
Interestingly, cross-modality can induce sensory interaction effects. For instance, the perceived direction of auditory motion can be influenced by visual motion [56]. Consequently, in practical applications, designers must consider potential sensory cognitive conflicts arising from cross-modality interactions, such as the competition between auditory and tactile senses [57,58], or auditory and visual senses [59].
In addition to sensory interaction effects, visual and haptic feedback can play unique roles in enhancing target selection guidance. For example, users could leverage haptic feedback to determine the direction of a target within a potential area and then utilize visual information to pinpoint the target [60]. Similarly, a two-step guidance feedback strategy could employ auditory cues to initially provide a vertical location, followed by indicating the horizontal direction of the target [61].
The burgeoning body of research on multimodal feedback, particularly studies involving visual and haptic modalities, has provided valuable insights on enhancing user performance in complex environments (as summarized in Table 1). However, a more nuanced understanding is needed when these modalities are deployed in dual-task target search scenarios. Moreover, the design and interplay between different modalities during multi-tasking remains under-explored, such as the effectiveness of dynamic visual guidance and haptic guidance for different tasks. Accordingly, there is a pressing need to deepen our understanding of visual and tactile multimodal feedback in the context of multi-task target search scenarios. This stream of research not only advances theoretical understanding but also promises practical implications for designing more user-friendly interfaces and systems.

3. Study 1: Visual Guidance Design and Evaluation

The first study focused on visual guidance employed within a dual-task target searching environment. This investigation encompassed several key aspects, including interaction design, user evaluation, and the subsequent experimental results.

3.1. Study Design

We report the experimental evaluation details to scrutinize key design factors for visual guidance, which include arrow type, visual guidance continuity, and target location, as well as the specifics of the system implementation and experimental procedure.

3.1.1. Visual Guidance Interaction Design

Visual guidance is a consistent and reliable tool that assists users across various digital devices such as HMDs and mobile screens. Numerous visual guidance techniques are employed to aid users in a variety of tasks. Among these techniques, the use of arrows has proven to be a straightforward and effective method for directing user attention [2]. Studies have demonstrated the efficacy of arrows as visual cues in various settings, including finding out-of-view objects [25], cueing targets in manual assembly assistance systems [2], performing 180-degree visual search tasks [5], and searching for locations on a digital map [3,28].
Therefore, in our first study, we opted to utilize the arrow as the visual guidance technique. Given that presenting the cue at the center of the screen could lead to information clutter [5], we positioned the arrow at a 20-degree angle from the current gaze to prevent it from obstructing the user’s view in our cluttered target search and selection environments. The arrow appeared simultaneously as a target emerged behind the user, pointing in the direction of the target to guide the user to search for it behind them. To simplify target selection and avoid introducing more complex selection operations, we implemented the gaze-and-dwell selection confirmation technique. Users were merely required to maintain their gaze on the target for one second to affirm their selection.
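As an illustration of this placement rule, the following sketch computes an arrow anchor point offset 20 degrees from the current gaze direction, toward the side on which the out-of-view target appeared. This is not the study’s Unity implementation; the variable names, the rotation about the vertical axis, and the use of the 1.1 m display distance reported later in Section 3.1.5 are assumptions made for illustration.

```python
import numpy as np

def arrow_anchor(head_pos, gaze_dir, target_side, offset_deg=20.0, distance=1.1):
    """Place the guidance arrow 20 degrees off the current gaze direction, rotated
    about the vertical axis toward the side of the out-of-view target.

    head_pos: (3,) head position in metres; gaze_dir: (3,) current gaze direction;
    target_side: +1 if the target behind the user is to the right, -1 if to the left.
    """
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    theta = np.radians(offset_deg) * target_side
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[c, 0.0, s],      # rotation about the y (up) axis,
                      [0.0, 1.0, 0.0],  # assuming a y-up, right-handed frame
                      [-s, 0.0, c]])
    return np.asarray(head_pos, dtype=float) + distance * (rot_y @ gaze_dir)
```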

3.1.2. Experimental Settings

We used a Microsoft Hololens 2 as the experimental platform. It has see-through stereo displays with head-gaze tracking. The experimental applications were programmed using Unity 2019.4 and Mixed Reality Toolkit (MRTK) 2.6. The application was built and deployed using Microsoft Visual Studio 2019 version 16.9.4 on a Windows 10 PC. The experimental settings and target layout are presented in Figure 1a.

3.1.3. Independent Variables

There were two user tasks, and each task had four independent variables. For the primary task, the independent variables were as follows:
  • Arrow Type (2 levels): Static Arrow and Dynamic Arrow;
  • Visual Guidance Continuity (2 levels): Transient (disappear after the user turns 30° away from the forward direction) and Persistent (continues to display until the user correctly selects the target behind them);
  • Target Row (5 levels): 1, 2, 3, 4, 5;
  • Target Column (5 levels): 1, 2, 3, 4, 5.
Similarly, the independent variables for the secondary task were as follows:
  • Arrow Type (2 levels): Static Arrow and Dynamic Arrow;
  • Visual Guidance Continuity (2 levels): Transient and Persistent;
  • Target Row (5 levels): 1, 2;
  • Target Column (5 levels): 1, 2, 3, 4, 5.

3.1.4. Experimental Design

This study employed a repeated measures within-participants design. Four primary sessions were conducted, each corresponding to a different combination of Arrow Type (Static Arrow and Dynamic Arrow, as detailed in Figure 1b) and Visual Guidance Continuity (Transient and Persistent). The sequence of these primary sessions was randomized, with a five-minute intermission introduced between each session.
In each session, all targets for both primary and secondary tasks were presented once. Each session encompassed 5 practice trials (3 for primary tasks and 2 for secondary tasks) and 35 test trials (25 for primary tasks and 10 for secondary tasks). Therefore, each participant completed a total of 160 trials.
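A minimal sketch of how such a trial schedule could be generated is shown below. The interleaving of a secondary trial after every two to four primary selections follows Section 3.1.5; everything else, including the function and variable names, is illustrative rather than the study’s actual implementation.

```python
import itertools
import random

ARROW_TYPES = ["static", "dynamic"]
CONTINUITY = ["transient", "persistent"]

def study1_schedule(seed=0):
    """Generate the four randomized Study 1 sessions; within each session every
    primary (5 x 5) and secondary (2 x 5) target position appears once, with a
    secondary trial inserted after every 2-4 primary trials (practice omitted)."""
    rng = random.Random(seed)
    sessions = list(itertools.product(ARROW_TYPES, CONTINUITY))
    rng.shuffle(sessions)
    schedule = []
    for arrow, continuity in sessions:
        primary = [("primary", r, c) for r in range(1, 6) for c in range(1, 6)]
        secondary = [("secondary", r, c) for r in range(1, 3) for c in range(1, 6)]
        rng.shuffle(primary)
        rng.shuffle(secondary)
        trials = []
        while primary or secondary:
            run = rng.randint(2, 4)             # 2-4 primary selections ...
            trials.extend(primary[:run])
            primary = primary[run:]
            if secondary:
                trials.append(secondary.pop())  # ... then one secondary target
        schedule.append({"arrow": arrow, "continuity": continuity, "trials": trials})
    return schedule
```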

3.1.5. Participants and Procedure

Twelve participants (five males and seven females) were recruited from the campus. Their mean age was 24.9 (SD = 7.65). All participants had experience with three-dimensional (3D) gaming, and five participants had VR experience. During the experiment, participants sat in a swivel chair and were asked to select the target (letter “P”) by staring at the target while remaining relaxed and comfortable. Similar to a previous work [60], we used letters to present the target. The target (letter “P”) could appear either in front of the participant (Primary Task) or behind them (Secondary Task) in a matrix of the letter “B”.
Each letter was displayed in white font on a dark blue square button. We implemented the gaze-and-dwell selection confirmation technique; therefore, users could merely maintain their gaze on the target for one second to affirm their selection. As a user directed their gaze toward the button, the background color transitioned to light blue. A dark blue circle then began expanding from the button’s center, signifying that the selection dwell was underway. Upon reaching full expansion after one second, the target was selected.
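The dwell logic can be summarized by the following sketch. It is platform-agnostic pseudologic rather than the MRTK implementation used in the study, and it assumes the host application supplies, every frame, which button (if any) the gaze ray currently hits.

```python
class DwellSelector:
    """Minimal gaze-and-dwell confirmation logic with a 1 s dwell threshold."""

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self.focused = None   # button currently under the gaze
        self.elapsed = 0.0    # accumulated dwell time on that button

    def update(self, hit_button, dt):
        """Call once per frame. Returns (selected_button or None, progress in [0, 1]).

        The progress value can drive the expanding-circle cue described above."""
        if hit_button != self.focused:                 # gaze left or changed button
            self.focused, self.elapsed = hit_button, 0.0
        elif hit_button is not None:
            self.elapsed += dt
            if self.elapsed >= self.dwell_time:        # dwell complete: select
                self.focused, self.elapsed = None, 0.0
                return hit_button, 1.0
        progress = self.elapsed / self.dwell_time if self.focused else 0.0
        return None, progress
```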
In the Primary Task, participants were required to select the target letter “P” positioned in front of them. A total of 25 buttons were presented 1.35 m away from the user, each button measuring 20 cm in width and separated by 2.5 cm, covering a total area of 1.1 m in both height and width. The Secondary Task involved selecting a target situated behind the user, with ten buttons evenly spaced at a distance of 1.35 m from the user. Each button was 20 cm wide and 42.5 cm apart, spanning a total area of 2.7 m in width and 82 cm in height.
The target letter “P” could appear behind the participant (Secondary Task) after a random sequence of 2 to 4 target selections in front (Primary Task). At the moment the target letter “P” appeared behind the user, a light blue arrow, 5 cm in height and positioned 1.1 m away, indicated the direction of the target to the participant. Considering that visual information placement should avoid the center of gaze to minimize interference with the visual scene [31], the arrow was placed at a 20-degree angle from the current gaze direction. In the Static Arrow condition, the arrow remained stationary, while in the Dynamic Arrow condition, it oscillated back and forth with an amplitude of 20 cm at a frequency of 4 Hz (Figure 1a). In the Persistent Arrow condition, the arrow followed the participant’s head movements until the correct selection was made. In the Transient Arrow condition, the arrow initially followed the participant’s head movements but disappeared once the angle between the gaze direction and the forward direction exceeded 30 degrees.
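The per-frame arrow behavior across the four conditions can be sketched as follows. This is an illustrative restatement, not the study’s Unity code; whether the 20 cm amplitude is peak or peak-to-peak, and the axis of oscillation, are assumptions.

```python
import math

def arrow_state(t, head_yaw_deg, dynamic, transient, dismissed,
                amplitude=0.20, frequency=4.0, cutoff_deg=30.0):
    """Return (visible, dismissed, lateral_offset_m) for one frame at time t.

    head_yaw_deg: head yaw measured from the forward direction. A transient arrow
    is dismissed once the head turns more than 30 degrees away and stays hidden
    for the rest of the trial; a persistent arrow remains visible until selection.
    A dynamic arrow oscillates back and forth by 20 cm at 4 Hz; a static one does not.
    """
    if transient and abs(head_yaw_deg) > cutoff_deg:
        dismissed = True                       # latch: do not reappear this trial
    visible = not dismissed
    offset = amplitude * math.sin(2.0 * math.pi * frequency * t) if dynamic else 0.0
    return visible, dismissed, offset
```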
Upon finalizing the experiment, participants were asked to complete a survey assessing their preferences for the different combinations of Arrow Type and Visual Guidance Continuity. Preferences were rated on a scale from 1 (strong dislike) to 9 (strong preference). Participants were also encouraged to provide additional qualitative feedback on their perceptions of the various interaction designs through comments and discussions. The entire experiment took about 50 min.

3.2. Results

Performance metrics such as selection time and error rate, along with user behavioral data including head rotation and head movement, were recorded to assess user performance. For the Secondary Task, users’ reaction times were also recorded. A repeated-measures analysis of variance (ANOVA) for Arrow Type × Visual Guidance Continuity × Row × Column was used to analyze the user performance measurements, including selection time, reaction time, head rotation, and head movement. As users successfully selected all targets during the experiment, no analysis was performed on the error rate.
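For readers who wish to reproduce this style of analysis, a minimal sketch in Python is given below. The file name and column names are hypothetical, and statsmodels’ AnovaRM does not apply the Greenhouse–Geisser correction reported in the paper, which would require a dedicated package or R.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format trial log: one row per trial with columns
# participant, arrow, continuity, row, col, selection_time.
df = pd.read_csv("study1_trials.csv")

# Four-way repeated-measures ANOVA:
# Arrow Type x Visual Guidance Continuity x Row x Column on selection time.
result = AnovaRM(
    df,
    depvar="selection_time",
    subject="participant",
    within=["arrow", "continuity", "row", "col"],
    aggregate_func="mean",   # average repeated cells per participant, if any
).fit()
print(result)
```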

3.2.1. Selection Time and User Behavior Data in Primary Task of Study 1

Selection Time The definition of selection time varied based on the nature of the preceding selection task. If the previous selection task was the same Primary Task, with the target in front of the participant, we defined the selection time as the interval from when the target (denoted by the letter “P”) appeared to the successful completion of the selection. Alternatively, if the preceding selection task was a Secondary Task, with the target positioned behind the participant, we adjusted the start point of the selection time because users needed time to turn their head forward again. Taking into account that the human central visual field typically spans a 60° diameter [62] and that all targets of the Primary Task were within this region, we measured the selection time from when the participant’s head returned to within 30 degrees of the forward direction up to the successful execution of the selection.
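A sketch of this measurement rule, applied to a per-frame head-pose log, is given below. The data layout and names are assumptions; yaw is the head’s angular deviation from the forward direction.

```python
import numpy as np

def primary_selection_time(timestamps, yaw_deg, target_onset, selected_at,
                           preceded_by_secondary, threshold_deg=30.0):
    """Selection time for a Primary Task trial under the two start rules above."""
    timestamps = np.asarray(timestamps, dtype=float)
    yaw_deg = np.asarray(yaw_deg, dtype=float)
    if not preceded_by_secondary:
        start = target_onset                          # target appeared while facing forward
    else:
        back = (timestamps >= target_onset) & (np.abs(yaw_deg) <= threshold_deg)
        start = timestamps[back][0]                   # head back within 30 deg of forward
    return selected_at - start
```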
Main effects were found for Arrow (F(1,11) = 7.78, p < 0.05), Row (F(2.23,24.51) = 11.72, p < 0.001), and Column (F(4,44) = 15.25, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) The mean selection times across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 2a.
Post hoc Bonferroni pairwise comparisons showed that the selection time with the Dynamic Arrow (2.73 s) was significantly faster than that with the Static Arrow (2.87 s) (p < 0.05). The selection time of row 1 (3.05 s) was significantly longer than those of row 2 (2.58 s) (p < 0.001), row 3 (2.65 s) (p < 0.01), and row 4 (2.62 s) (p < 0.05); row 5 (3.08 s) was significantly longer than row 2 (2.58 s) (p < 0.05), row 3 (2.65 s) (p < 0.01), and row 4 (2.62 s) (p < 0.01).
The selection time of column 2 (2.65 s) was significantly faster than that of column 1 (2.94 s) (p < 0.05) and column 5 (3.13 s) (p < 0.001); column 3 (2.54 s) was significantly faster than column 1 (2.94 s) (p < 0.05), column 4 (2.73 s) (p < 0.01), and column 5 (3.13 s) (p < 0.001); column 4 (2.73 s) was significantly faster than column 5 (3.13 s) (p < 0.05).
Head Rotation Angle We recorded the total head rotation angle during the selection time. Main effects were found for Row (F(2.37,26.05) = 8.88, p < 0.01) and Column (F(4,44) = 7.75, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No other main or interaction effect was found. The mean head rotation angles across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 2b. Post hoc Bonferroni pairwise comparisons showed that the head rotation angle of row 1 (83.80 degrees) was significantly larger than those of row 2 (62.99 degrees) (p < 0.001) and row 3 (68.06 degrees) (p < 0.01), and that row 5 (84.36 degrees) was significantly larger than row 3 (68.06 degrees) (p < 0.01) and row 4 (68.48 degrees) (p < 0.05).
Head Movement Distance We recorded the total head movement distance during the selection time. Main effects were found for Row (F(4,44) = 9.19, p < 0.001) and Column (F(4,44) = 4.71, p < 0.05). The mean total head movement distances across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 2c. Post hoc Bonferroni pairwise comparisons showed that the distance of row 1 (17.20 cm) was significantly longer than those of row 2 (13.40 cm) (p < 0.001), row 3 (14.16 cm) (p < 0.01), and row 4 (14.13 cm) (p < 0.05), and row 5 (16.72 cm) was significantly longer than row 4 (14.13 cm) (p < 0.05). The distance of column 5 (16.82 cm) was significantly longer than those of column 2 (14.31 cm) and column 3 (14.12 cm) (p < 0.05).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, main effects were found for Row (F(2.27,24.97) = 8.53, p < 0.005) and Column (F(4,44) = 6.11, p < 0.005). Post hoc Bonferroni pairwise comparisons showed that the distance of row 1 (9.63 cm) was significantly longer than those of row 2 (7.23 cm) (p < 0.001), row 3 (8.01 cm) (p < 0.05), and row 4 (8.11 cm) (p < 0.05); row 5 (9.57 cm) was significantly longer than row 4 (8.11 cm) (p < 0.05). The distance of column 5 (9.72 cm) was significantly longer than those of column 2 (7.82 cm) (p < 0.01) and column 3 (7.87 cm) (p < 0.01).
For the Y dimension, main effects were found for Row (F(2.33,25.67) = 11.33, p < 0.001) and Column (F(2.26,24.82) = 9.16, p < 0.005). Post hoc Bonferroni pairwise comparisons showed that the distance of row 1 (6.32 cm) was significantly longer than those of row 2 (4.93 cm) (p < 0.01), row 3 (5.22 cm) (p < 0.05), and row 4 (5.19 cm) (p < 0.05). The distance for row 5 was the longest (6.49 cm) and was significantly longer than those of row 2 (4.93 cm) (p < 0.01), row 3 (5.22 cm) (p < 0.01), and row 4 (5.19 cm) (p < 0.001). The distance of column 5 (6.63 cm) was significantly longer than those of column 2 (5.16 cm) (p < 0.01), column 3 (5.05 cm) (p < 0.01), and column 4 (5.37 cm) (p < 0.05).
For the Z dimension, a main effect was found for Row (F(4,44) = 6.42, p < 0.001). Post hoc Bonferroni pairwise comparisons showed that the distance of row 1 (9.21 cm) was significantly longer than those of row 2 (7.40 cm) (p < 0.05), row 3 (7.43 cm) (p < 0.001), and row 4 (7.23 cm) (p < 0.05).
The mean head movement distances across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 2.

3.2.2. Reaction Time, Selection Time, and User Behavior Data in Secondary Task of Study 1

Reaction Time For the Secondary Task, we collected data on the participants’ reaction times. Considering that the human central visual field typically spans a 60° diameter and is capable of detecting major contrasts, colors, and motion [62], and that all the targets in the Primary Task were located inside this area, we measured the reaction time of the Secondary Task from the instant that the arrow was displayed to the point when the participant’s head movement exceeded a 30-degree angle from the forward direction.
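This measurement rule can likewise be read off a per-frame head-pose log, as sketched below with assumed names; the reaction time is simply the first threshold crossing after cue onset.

```python
import numpy as np

def secondary_reaction_time(timestamps, yaw_deg, cue_onset, threshold_deg=30.0):
    """Time from cue (arrow) onset until the head first exceeds 30 degrees from forward."""
    timestamps = np.asarray(timestamps, dtype=float)
    yaw_deg = np.asarray(yaw_deg, dtype=float)
    crossed = (timestamps >= cue_onset) & (np.abs(yaw_deg) > threshold_deg)
    return timestamps[crossed][0] - cue_onset
```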
Main effects were found for Arrow Type (F(1,11) = 16.03, p < 0.01) and Column (F(4,44) = 6.40, p < 0.001). An interaction effect was found for Row × Column (F(2.52,27.76) = 5.59, p < 0.01). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No other main effect or interaction effect was found. The mean reaction times across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 3a.
Post hoc Bonferroni pairwise comparisons showed that the reaction time for the Dynamic Arrow (4.66 s) was significantly faster than that for the Static Arrow (5.87 s) (p < 0.01), and column 5 (4.71 s) was significantly faster than column 3 (5.09 s).
Selection Time For the Secondary Task, we recorded the selection time from the point when the participant’s head movement exceeded a 30-degree angle from the forward direction to the point when a successful selection was made. Main effects were found for Arrow Type (F(1,11) = 14.77, p < 0.05) and Column (F(4,44) = 6.73, p < 0.001). No other main effect or interaction effect was found. The mean selection times across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 3b. Post hoc Bonferroni pairwise comparisons showed that the selection time for the Dynamic Arrow (3.32 s) was significantly faster than that for the Static Arrow (3.67 s) (p < 0.01), and column 2 (3.16 s) was significantly faster than column 3 (3.89 s).
Head Rotation Angle We recorded the total head rotation angle during the selection time. Main effects were found for Arrow Type (F(1,11) = 5.41, p < 0.05) and Column (F(4,44) = 8.94, p < 0.001). No other main effect or interaction effect was found. The mean head rotation angles across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 3c.
Post hoc Bonferroni pairwise comparisons showed that the head rotation angle for the Dynamic Arrow (141.06 degrees) was significantly smaller than that for the Static Arrow (152.26 degrees) (p < 0.01), and column 2 (134.87 degrees), column 4 (145.41 degrees), and column 5 (140.71 degrees) were significantly smaller than column 3 (170.04 degrees).
Head Movement Distance We recorded the total head movement distance during the selection time. A main effect was found for Column (F(1.98,21.78) = 4.00, p < 0.05). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No other main effect or interaction effect was found for Arrow Type, Visual Guidance Continuity, or Row. Post hoc Bonferroni pairwise comparisons showed that the head movement distance of column 2 (23.59 cm) was significantly shorter than that of column 3 (28.04 cm) (p < 0.01).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, a main effect was found for Column (F(2.30,25.30) = 4.00, p < 0.05). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No other main effect or interaction effect was found for Arrow Type, Visual Guidance Continuity, or Row. Post hoc Bonferroni pairwise comparisons showed that the head movement distance along the X dimension of column 2 (13.71 cm) was significantly shorter than those of column 3 (16.96 cm) (p < 0.01) and column 1 (15.12 cm) (p < 0.05).
For the Y dimension, a main effect was found for Column (F(4,44) = 10.28, p < 0.001). Post hoc Bonferroni pairwise comparisons showed that the distance along the Y dimension of column 3 (7.25 cm) was significantly longer than those of column 1 (5.70 cm) (p < 0.05), column 2 (5.51 cm) (p < 0.01), and column 5 (5.73 cm) (p < 0.05).
For the Z dimension, no main effect or interaction effect was found. The mean head movement distances across Arrow Type, Visual Guidance Continuity, Row, and Column are illustrated in Figure 3.

3.2.3. User Preference

After the test, the participants completed a questionnaire covering their preferences and comments. A one-way repeated-measures ANOVA across the combinations of Arrow Type and Visual Guidance Continuity found a significant effect on user preference (F(1.61,17.68) = 7.86, p < 0.01). Post hoc Bonferroni pairwise comparisons showed that the preference for the Dynamic Persistent Arrow (7.58) was significantly higher than for the other techniques (p < 0.05). A one-way repeated-measures ANOVA of the NASA TLX workload ratings revealed no significant main effects for the overall TLX score or the Mental, Physical, Frustration, Performance, Effort, and Temporal subscales, as shown in Figure 4b.
Feedback from participants aligned with our preference findings. They reported that the Static Arrow was less noticeable than the Dynamic Arrow. The Dynamic Arrow’s visual effect was reportedly encouraging, prompting users to expedite the target location task, thereby inducing a sense of temporal pressure. This, in turn, motivated users to locate the target more swiftly. Consequently, the Dynamic Arrow was deemed more noticeable, and its guidance more efficient and fluent. However, the Dynamic Arrow’s visual effect could also induce anxiety if users were unable to locate the target swiftly, and the continuous movement could potentially lead to visual fatigue.
Participants had diverse opinions about the Transient and Persistent Arrow. Some reported that the absence of a persistent guide required more effort and time to accurately locate the target. These users found locating the target easier with the Persistent Arrow. Despite providing initial guidance, some participants felt the Persistent Arrow offered no additional help in locating the target. When users were highly focused on the task, the Transient Arrow was deemed more efficient as it allowed users to concentrate on the target location independently, often resulting in faster results than following the arrow. However, for tired users or those seeking a more relaxed experience, the Persistent Arrow was recommended for its guiding ability. On the downside, the Persistent Arrow’s drifting nature could be distracting during target search and selection. In contrast, the Transient Arrow only provided directional guidance without hovering around the target, which was deemed sufficient.

3.2.4. Result Summary

The use of dynamic arrows resulted in a notable decrease in reaction times when participants switched from the Primary Task to the Secondary Task. The average reaction time was reduced by 20.61%. The use of Dynamic Arrows also reduced the target selection time for both the Primary Task (by 4.87%) and the Secondary Task (by 9.54%). However, the use of dynamic visual feedback did not reduce the workload, according to the NASA TLX questionnaire.
These results highlight the strength of dynamic visual feedback in improving reaction times during task switching. The dynamic visual effect could enhance reaction speed, which is critical in multitasking environments. However, the lack of improvement in cognitive load suggests that visual guidance alone may not be sufficient in more complex or demanding scenarios. Thus, we employed additional forms of sensory feedback, such as haptic cues, in our second study to create a more balanced and effective guidance system.

4. Study 2: Visual–Haptic Guidance Design and Evaluation

This section introduces haptic feedback design and evaluation combined with the visual guidance. We designed and implemented a wearable haptic feedback device for the user’s upper body and conducted the second experimental evaluation to investigate the haptic feedback design to improve the target searching task.

4.1. Study Design

Consistent with Study 1, we continued to use the Microsoft Hololens 2 with the same development tools. For the wearable haptic feedback device, we employed the bHaptics TactSuit X40, which comprises 40 tactile feedback units distributed around the upper body (20 units on the chest and 20 on the back).

4.1.1. Haptic Feedback Design for Target Searching Task

Haptic feedback has demonstrated its usefulness across a range of user scenarios, including localizing out-of-view objects [21,49,63], guiding search tasks [60], enhancing entertainment experiences [34,35], and aiding in car driving [17]. In dual-task environments, digital haptic feedback has been shown to improve the user performance in complex driving dual-tasks settings [22] and to promote faster responses in visual target searching [57].
Recognizing that haptic feedback may assist in maintaining or even enhancing visual attention in dual-task settings [22], we conducted an evaluation that incorporated wearable haptic feedback, building on our evaluation of visual guidance in Study 1. In accordance with the results of Study 1, we employed the Dynamic Arrow design as the visual guidance technique, comparing it with the haptic feedback design and a combination of both visual and haptic feedback guidance methods.
Previous studies suggest that multiple tactile units can be perceived as a single area by the user [32,33]. As such, we were able to map the target direction behind the user on the user’s back using a haptic vest device equipped with numerous tactile units, as depicted in Figure 5b. When a target appeared behind the users in the Secondary Task, haptic feedback would simultaneously activate in the direction of the target, prompting the user to search for it behind them. In a visually demanding dual-task setting, introducing this additional interaction modality could be beneficial for reducing cognitive load [22,57].

4.1.2. Experimental Settings

In the experiment, haptic direction guidance was created utilizing the two columns of tactile units located on the left and right sides of the posterior region of the bHaptics TactSuit X40, as depicted in Figure 5b. All the tactile units in the chest region were deactivated; hence, the selection of the target and the corresponding haptic feedback were localized solely on the back of the user.
The haptic device was operated through an application built using Unity 2019.4 and the bHaptics Haptic Plugin. This haptic control application was deployed on a Windows 11 desktop PC. When the target was displayed behind the user on the Hololens, the direction information of the target was published using the MQTT (Message Queuing Telemetry Transport) messaging protocol (https://mqtt.org/, accessed on 22 May 2024). The MQTT server was established on Tencent Cloud (https://cloud.tencent.com/product/lighthouse, accessed on 22 May 2024) and operated with EMQX V4.0.4 (https://www.emqx.io, accessed on 22 May 2024) on a CentOS 7 server. The application on the Hololens 2 published a message to the MQTT server whenever a target appeared behind the user in the Secondary Task. The haptic control application subscribed to the direction guidance messages and triggered the corresponding haptic feedback upon receiving one, as shown in Figure 5c. The frequency of the haptic feedback was set to four pulses per second based on a previous study [64], with each pulse lasting 100 ms.
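The control flow on the haptic side can be sketched as follows, using Python with paho-mqtt rather than the Unity/bHaptics plugin actually used in the study. The broker address, topic name, and the fire_back_column placeholder are hypothetical, and the one-second burst duration is an assumption since the paper specifies only the pulse rate and pulse length.

```python
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"            # hypothetical; the study used an EMQX server
TOPIC = "study2/target_direction"        # hypothetical topic published by the HoloLens app

def fire_back_column(side):
    """Placeholder for the bHaptics plugin call that drives the left or right
    column of back motors on the TactSuit X40 (real API not reproduced here)."""
    print(f"vibrate {side} back column")

def on_message(client, userdata, msg):
    side = msg.payload.decode()          # "left" or "right"
    for _ in range(4):                   # four pulse onsets per second ...
        fire_back_column(side)
        time.sleep(0.10)                 # ... each pulse lasting 100 ms
        time.sleep(0.15)                 # gap so onsets are spaced 250 ms apart
    # (Blocking inside the callback keeps the sketch short; a real controller
    # would trigger the pulse pattern asynchronously.)

client = mqtt.Client()                   # paho-mqtt 1.x constructor; 2.x also requires a callback API version
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```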

4.1.3. Independent Variables

In Study 2, we employed the settings that were determined to optimize the user experience based on the evaluation results from Study 1. A Dynamic Arrow appearing transiently was used as visual feedback, compared against haptic feedback and the combination of visual–haptic feedback.
For the Primary Task, the independent variables were as follows:
  • Feedback Modality (3 levels): Visual Feedback, Haptic Feedback, and Visual–Haptic Feedback;
  • Target Row (5 levels): 1, 2, 3, 4, 5;
  • Target Column (5 levels): 1, 2, 3, 4, 5.
Similarly, the independent variables for the Secondary Task were as follows:
  • Feedback Modality (3 levels): Visual Feedback, Haptic Feedback, and Visual–Haptic Feedback;
  • Target Row (5 levels): 1, 2;
  • Target Column (5 levels): 1, 2, 3, 4, 5.

4.1.4. Experimental Design

Study 2 also utilized a repeated-measures within-participants design. There were three sessions, each corresponding to a different Guidance Feedback Type: visual feedback, haptic feedback, and visual–haptic feedback. The sequence of the sessions was counterbalanced, with a five-minute break between sessions. Within each session, the experimental design was the same as that of Study 1: every target position in the Primary and Secondary Tasks was presented once in a randomized order, resulting in a total of 120 trials completed by each participant.
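One common way to counterbalance three sessions across twelve participants is to cycle through all six orderings, as sketched below. The paper states only that the order was counterbalanced, not the exact scheme, so this is purely illustrative.

```python
from itertools import permutations

MODALITIES = ["visual", "haptic", "visual-haptic"]

def counterbalanced_orders(n_participants=12):
    """Assign session orders so each of the 3! = 6 permutations is used twice."""
    orders = list(permutations(MODALITIES))
    return [orders[i % len(orders)] for i in range(n_participants)]

for pid, order in enumerate(counterbalanced_orders(), start=1):
    print(pid, order)
```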

4.1.5. Participants and Procedure

A group of twelve participants (five males and seven females) was recruited from the university campus, with a mean age of 25.5 (SD = 7.33). All participants were familiar with 3D gaming, and four of them had prior VR experience. The participants wore the bHaptics TactSuit X40, and the experimental procedure replicated that of Study 1. The entire experimental procedure took approximately 50 min.

4.2. Results

As in Study 1, we recorded the reaction time, selection time, error rate, and user behavior data, such as head rotation and head movement, to evaluate user performance. A repeated-measures analysis of variance (ANOVA) for Feedback Modality × Row × Column was used to analyze the user performance measurements. As users successfully selected all targets in the experiment, no analysis of the error rate was performed. All user performance measurement definitions were the same as in Study 1.

4.2.1. Selection Time and User Behavior Data in Primary Task of Study 2

Selection Time Main effects were found for Row (F(2.09,23.03) = 9.31, p < 0.001) and Column (F(4,44) = 19.62, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No interaction effect was found. The mean selection times across Feedback Modality, Row, and Column are illustrated in Figure 6a. Post hoc Bonferroni pairwise comparisons showed that the selection time of row 5 (3.13 s) was significantly slower than those of row 2 (2.60 s) (p < 0.01), row 3 (2.51 s) (p < 0.01), and row 4 (2.72 s) (p < 0.05).
The selection time of column 1 (3.01 s) was significantly slower than those of column 2 (2.54 s) (p < 0.01), column 3 (2.59 s) (p < 0.05), and column 4 (2.60 s) (p < 0.001); column 5 (3.06 s) was significantly slower than column 2 (2.54 s) (p < 0.001), column 3 (2.59 s) (p < 0.01), and column 4 (2.60 s) (p < 0.001).
Head Rotation Angle Main effects were found for Row (F(2.12,23.33) = 5.34, p < 0.05) and Column (F(4,44) = 8.58, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No interaction effect was found. The mean head rotation angles across Feedback Modality, Row, and Column are illustrated in Figure 6b.
Post hoc Bonferroni pairwise comparisons showed that the head rotation angles of row 2 (66.40 degrees) and row 3 (61.37 degrees) were significantly smaller than that of row 5 (84.92 degrees) (p < 0.05), and that the head rotation angles of column 2 (64.31 degrees), column 3 (67.79 degrees), and column 4 (64.49 degrees) were significantly smaller than that of column 5 (82.95 degrees) (p < 0.05).
Head Movement Distance Main effects were found for Row (F(4,44) = 5.21, p < 0.05) and Column (F(4,44) = 6.65, p < 0.001). Post hoc Bonferroni pairwise comparisons showed that the total head movement distance of row 5 (15.69 cm) was significantly longer than that of row 3 (11.76 cm) (p < 0.05), and that column 1 (15.02 cm) and column 5 (15.43 cm) were significantly longer than column 4 (12.41 cm) (p < 0.05).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, main effects were found for Row (F(4,44) = 3.22, p < 0.05) and Column (F(4,44) = 9.04, p < 0.001); post hoc Bonferroni pairwise comparisons showed that the distances along the X dimension of column 1 (8.69 cm) and column 5 (8.90 cm) were significantly longer than those of column 2 (6.69 cm) and column 4 (6.93 cm) (p < 0.05).
For the Y dimension, main effects were found for Row (F(4,44) = 8.66, p < 0.001) and Column (F(2.06,22.62) = 8.47, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) Post hoc Bonferroni pairwise comparisons showed that the head movement distance along the Y dimension of row 5 (7.06 cm) was significantly longer than those of row 2 (5.17 cm), row 3 (5.06 cm), and row 4 (5.74 cm) (p < 0.05); the distance along the Y dimension of column 5 (6.61 cm) was significantly longer than those of column 2 (5.27 cm), column 3 (5.30 cm), and column 4 (5.23 cm) (p < 0.05); and the distance along the Y dimension of column 1 (6.38 cm) was significantly longer than that of column 4 (5.23 cm) (p < 0.05).
For the Z dimension, a main effect was found for Row (F(4,44) = 3.79, p < 0.05). Post hoc Bonferroni pairwise comparisons showed that the head movement distance along the Z dimension of row 5 (7.44 cm) was significantly longer than that of row 3 (5.71 cm) (p < 0.05).
The mean head movement distances across Feedback Modality, Row, and Column are illustrated in Figure 6.

4.2.2. Reaction Time, Selection Time, and User Behavior Data in Secondary Task of Study 2

Reaction Time A main effect was found for Feedback Modality (F(2,22) = 20.80, p < 0.001). No other main effect or interaction effect was found for Row and Column. The mean reaction times across Feedback Modality, Row, and Column are illustrated in Figure 7a. Post hoc Bonferroni pairwise comparisons showed that the reaction times of Haptic Feedback (4.15 s) and Visual–Haptic Feedback (3.86 s) were significantly faster than that of Visual Feedback (5.24 s) (p < 0.05).
Selection Time A main effect was found for Column (F(2.46,27.05) = 15.56, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) No other main effect or interaction effect was found for Feedback Modality and Row. The mean selection times across Feedback Modality, Row, and Column are illustrated in Figure 7b. Post hoc Bonferroni pairwise comparisons showed that the selection times of column 1 (3.12 s), column 2 (3.10 s), column 4 (3.00 s), and column 5 (3.25 s) were significantly faster than that of column 3 (4.11 s) (p < 0.05).
Head Rotation Angle A main effect was found for Column (F(4,44) = 29.97, p < 0.001). No other main effect or interaction effect was found for Feedback Modality and Row. The mean head rotation angles across Feedback Modality, Row, and Column are illustrated in Figure 7c. Post hoc Bonferroni pairwise comparisons showed that the angles of column 1 (133.85 degrees), column 2 (140.24 degrees), column 4 (143.79 degrees), and column 5 (139.82 degrees) were significantly smaller than that of column 3 (193.91 degrees) (p < 0.001).
Head Movement Distance A main effect was found for Column (F(4,44) = 14.70, p < 0.001). No other main effect or interaction effect was found for Feedback Modality and Row. Post hoc Bonferroni pairwise comparisons showed that the head movement distance of column 3 (31.12 cm) was significantly longer than those of column 1 (22.75 cm), column 2 (23.16 cm), column 4 (22.01 cm), and column 5 (22.85 cm) (p < 0.05).
We also recorded the head movement distance along the X, Y, and Z dimensions. For the X dimension, a main effect was found for Column (F(4,44) = 9.97, p < 0.001). No other main effect or interaction effect was found for Feedback Modality and Row. Post hoc Bonferroni pairwise comparisons showed that the distance along the X dimension of column 3 (17.48 cm) was significantly longer than those of column 1 (12.53 cm), column 2 (12.51 cm), column 4 (12.10 cm), and column 5 (12.71 cm) (p < 0.05).
For the Y dimension, a main effect was found for Column (F(2.09,22.93) = 16.48, p < 0.001). (The sphericity assumption was not met so the Greenhouse–Geisser correction was applied; the corrected degrees of freedom are shown.) Post hoc Bonferroni pairwise comparisons showed that the distance along the Y dimension of row 2 (7.63 cm) was significantly longer than that of row 1 (6.46 cm) (p < 0.05), and that the distance along the Y dimension of column 3 (9.83 cm) was significantly longer than those of column 1 (6.37 cm), column 2 (6.70 cm), column 4 (6.11 cm), and column 5 (6.21 cm) (p < 0.05).
For the Z dimension, a main effect was found for Column (F(4,44) = 9.06, p < 0.001). Post hoc Bonferroni pairwise comparisons showed that the distance along the Z dimension of column 3 (17.42 cm) was significantly longer than those of column 1 (13.92 cm), column 2 (14.28 cm), column 4 (13.26 cm), and column 5 (13.94 cm) (p < 0.05).
The mean head movement distances across Feedback Modality, Row, and Column are illustrated in Figure 7.

4.2.3. User Preference

After the test, the participants completed a questionnaire covering their preferences and comments. A one-way repeated-measures ANOVA across the guidance feedback types found a significant effect on preference (F(2,22) = 22.17, p < 0.001). Post hoc Bonferroni pairwise comparisons showed that the preferences for Haptic (7.25) and Visual–Haptic (7.75) feedback were significantly higher than that for Visual feedback (4.25) (p < 0.01), as shown in Figure 8a. A one-way repeated-measures ANOVA also found a significant main effect on NASA TLX workload (F(2,22) = 4.90, p < 0.05), as shown in Figure 8b. Post hoc Bonferroni pairwise comparisons showed that the workload of Visual–Haptic (23.89) was significantly lower than that of Visual (34.31) (p < 0.05).
Based on user feedback, haptic feedback was found to elicit quicker responses compared to visual feedback, thereby reducing the time users spent in locating and following visual arrow indicators. Users reported that haptic feedback was direct and permitted rapid reactions with minimal cognitive load, thus reducing mental and visual demands. Users also reported being able to respond faster when relying solely on haptic feedback. Conversely, visual feedback had some drawbacks, including continuously shifting a user’s attention, obstructing their line of sight, and being impacted by a user’s visual attention and fatigue.
In our experiment, haptic feedback only provided left and right directional cues. According to participants, providing more precise target location information would be beneficial. The vibration area was reportedly large, leading to some users taking time to discern if the direction was left or right. Therefore, users expressed a preference for a reduced vibration area, which would make the determination of the vibration direction more accurate. Moreover, users generally favored stronger haptic feedback settings.

4.2.4. Results Summary

With visual–haptic feedback, the average reaction time (3.86 s) was reduced by 26% compared to visual guidance alone. Subjective measures obtained using the NASA TLX questionnaire indicate a reduction in workload when both guidance modalities were employed. Similarly, participants also reported a significantly higher preference for visual–haptic feedback.
These findings underscore the practical implications of employing multimodal sensory guidance in complex multitasking environments, with a particular emphasis on haptic feedback. By leveraging both visual and haptic feedback, system designers can enhance task switching and overall user satisfaction. This approach not only optimizes the user experience but also contributes to safety and efficiency in critical operational contexts such as driving, industrial operations, and emergency response scenarios.

5. Discussion

In this section, we delve into the implications of guidance feedback design based on the experimental results obtained from our studies.

5.1. Effect of Visual Guidance

Interestingly, a previous study revealed that introducing a flickering dynamic visual effect to arrow guidance could potentially slow down the target search performance [2]. Conversely, our study revealed that the dynamic movement of an arrow oscillating back and forth significantly accelerated both reaction and selection times, compared to guidance provided by a Static Arrow, and it also reduced the angle of head rotation (as summarized in Table 2). The results affirm that the Dynamic Arrow could yield a quicker reaction time for the Secondary Task.
This improvement in reaction time could be attributable to the dynamic visual effect making it easier for users to notice the guidance, subsequently reducing response time. It was also easier to identify the target direction due to the arrow’s motion. The Dynamic Arrow further resulted in faster selection times and smaller head rotation angles in the Secondary Task. This could be credited to the more noticeable and clearer indications provided by the Dynamic Arrow, which allowed users to move their heads more efficiently and accurately.
In addition to enhancing reaction time for the out-of-view target search task behind the user (the Secondary Task), the Dynamic Arrow also improved selection times for the Primary Task, with the target positioned in front of the user. Given that no visual guidance is provided for the Primary Task, this improvement can be construed as a result of the overall task execution speed being faster, thus facilitating a smoother transition from the back to the front with Dynamic Arrow guidance.
In contrast, under the category of Visual Guidance Continuity, no significant difference in reaction time, selection time, or user head rotation and movement was noted between the Transient and Persistent Arrows in both the Primary and Secondary Tasks. This may be due to users’ ability to actively locate the target behind them after discerning its direction, thereby diminishing the need for arrow guidance. User feedback suggested that the continuous movement of the arrow could potentially impede visual task searching, which is an outcome that aligns with findings indicating that visual cues can be beneficial in complex tasks but might degrade performance in simpler ones [65]. The Secondary Task, which contains only 10 targets in a sparsely populated environment, makes target acquisition a relatively easy task. In this scenario, arrow guidance may not be a crucial component in enhancing user target location performance.

5.2. Effect of Haptic Feedback

In Study 2, haptic feedback shortened the reaction time when switching from the Primary Task to the Secondary Task, improved the user experience, and reduced the workload (as summarized in Table 3). Users reported that haptic feedback was more salient and easier to perceive than visual feedback, prompting immediate reactions when provided. The efficacy of haptic feedback can likely be ascribed to the considerable cognitive load imposed by the visual search in the Primary Task, which can divert users' attention away from the visual guidance arrow. Because haptic feedback introduces a distinctly different sensory modality for the direction reminders in the Secondary Task, users can respond more promptly to the haptic guidance.
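The sketch below illustrates one way such a directional haptic cue might be dispatched when a Secondary Task target appears: a left or right vibrotactile pulse is chosen from the target's azimuth relative to the user's head. The HapticDevice class, its pulse method, and the angle convention are hypothetical placeholders, not the API of our apparatus.

```python
# Hypothetical actuator interface; the apparatus in our study used its own haptic API.
class HapticDevice:
    def pulse(self, side: str, duration_ms: int = 150) -> None:
        print(f"Vibrate {side} actuator for {duration_ms} ms")

def signed_azimuth(head_yaw_deg: float, target_bearing_deg: float) -> float:
    """Angle from the user's facing direction to the target, wrapped to
    [-180, 180). With compass-style yaw (increasing clockwise), a negative
    value means the target lies to the user's left."""
    return (target_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def cue_secondary_target(head_yaw_deg: float, target_bearing_deg: float,
                         device: HapticDevice) -> None:
    """Fire a directional vibrotactile cue when a Secondary Task target appears."""
    azimuth = signed_azimuth(head_yaw_deg, target_bearing_deg)
    device.pulse("left" if azimuth < 0 else "right")

# Target behind and to the left of a forward-facing user -> left pulse.
cue_secondary_target(head_yaw_deg=0.0, target_bearing_deg=200.0, device=HapticDevice())
```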
Thus, haptic feedback could prove to be an invaluable interaction design resource in situations where users are extensively engaged in visually demanding tasks. Such scenarios often require users to concentrate their limited visual attention on a primary task in a specific direction, raising the risk that they overlook critical visual information from other directions. Instances of these scenarios include situations where individuals are walking, driving, cycling, or working in the field, with their visual attention primarily directed forward. In these circumstances, haptic guidance techniques could elicit quick responses and thereby enhance safety.
In our experiment, haptic feedback served as guidance for users to transition effectively from the Primary Task to the Secondary Task, improving reaction time without compromising Primary Task performance. This finding supports the notion that, in dual-task settings, performance is better when the two tasks draw on different sensory modalities than when they compete for the same modality [19]. Our findings also align with those of Tivadar et al. [22], who demonstrated that haptic feedback in a secondary task, such as slide controlling, improved overall performance. Furthermore, although visual cues can diminish performance in easy visual target acquisition tasks [65], our results suggest that haptic feedback can serve as an effective alternative in such cases.

5.3. Effect of Target Position

In the Primary Task, without any guidance for target selection, participants were required to independently select targets from a densely populated environment. As a result, their active search strategy usually began from the central area. Consequently, targets situated in the middle columns (i.e., columns 2, 3, and 4) were selected faster than those in the peripheral columns (i.e., columns 1 and 5). This pattern remained consistent in both Study 1 and Study 2. With regard to the various rows in the Primary Task, targets located in the middle rows were also selected more quickly than those at the top and bottom, suggesting that users tend to initiate their search from the center before expanding vertically.
Consistent with the selection time, head movements (i.e., the angle of head rotation and the distance of head movement) were shorter when users selected targets in the central area. These findings are in accordance with previous research on freehand selection, which found that users tend to position their hands and focus their vision around the center area when selecting a target in a grid-option layout [66].
In the Secondary Task, targets were located behind the user, and directional guidance was provided via visual and haptic feedback. This setup required users to turn either left or right following the guidance, making them aware of the potential target’s location behind them. Consequently, the Secondary Task fostered a distinct target search strategy compared to the Primary Task; users would initiate their search for the target from the sides and then move toward the center. As a result, targets in the columns on either side of the target area behind (i.e., columns 1 and 2 vs. 4 and 5) were identified faster than those in column 3 in the middle. Correspondingly, the middle column was also associated with larger head rotation angles and greater distances of head movement. The selection times and rotation angles for targets located in the side and center columns during the Primary and Secondary Tasks in Study 2 are summarized in Table 4.

5.4. Implications and Guidelines for Designers

Based on user performance data and feedback, we propose the following design guidelines for designers working on optimizing target searching tasks in multitask scenarios:
  • Our first study demonstrated that the dynamic effect of arrow visual guidance could expedite both reaction and selection times compared to Static Arrow guidance, thereby enhancing the overall task performance in a multitask environment. Hence, when visual guidance is essential in multitask settings, interaction designers should consider incorporating dynamic effects, such as an oscillating arrow visual guide to indicate target direction.
  • Haptic feedback may serve as a highly effective guidance modality in a multitask environment. Being more salient and easier to detect than visual guidance, especially in visually demanding environments, haptic feedback could lead to swifter response times. This allows users to focus their limited visual attention on a primary task, while still remaining attentive to the cues provided by the haptic modality.
  • In simpler task settings, designers should aim to keep visual guidance concise and straightforward. This could potentially mitigate the side effects of visual guidance in easier task settings, while still keeping users informed.
  • In a densely populated target setting, users tend to initiate their search from the center area. However, if the targets are sparsely distributed, users tend to begin their search according to their initial position. Therefore, if designers can control the target layout in a multitask environment, they can optimize the target searching and selection performances by placing important targets in suitable locations based on the complexity of the environment and users’ head movement patterns.
Such implications and guidelines can be applied across various user scenarios. In driving, for example, where maintaining continuous visual attention to the road is crucial, haptic feedback can effectively aid multitasking, particularly the monitoring of rearview mirrors. A vehicle equipped with a haptic feedback system in the driver’s seat could alert the driver to vehicles approaching too closely from behind through distinct vibrations in the seat backrest. This prompts the driver to check the rearview mirror without diverting attention from the road, enhancing safety and task efficiency, reducing the need for frequent visual checks, and ensuring continuous awareness of the traffic conditions behind.
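As a rough illustration of this idea, the sketch below triggers a backrest vibration when a rear sensor reports a vehicle closing within a threshold distance. The sensor reading, the 2 m threshold, and the vibrate_backrest call are illustrative assumptions rather than a production design, which would require far more signal filtering and validation.

```python
REAR_ALERT_DISTANCE_M = 2.0   # illustrative threshold, not a validated value

def vibrate_backrest(side: str, intensity: float) -> None:
    """Placeholder for a seat-haptics actuator command."""
    print(f"Backrest vibration on {side}, intensity {intensity:.1f}")

def on_rear_distance_update(distance_m: float, lateral_offset_m: float) -> None:
    """Alert the driver when a vehicle behind comes too close.

    `distance_m` is the gap to the following vehicle; `lateral_offset_m`
    is negative when that vehicle is to the driver's left and positive to
    the right (both assumed to come from a rear-facing sensor)."""
    if distance_m < REAR_ALERT_DISTANCE_M:
        side = "left" if lateral_offset_m < 0 else "right"
        # Stronger vibration as the vehicle gets closer.
        intensity = min(1.0, (REAR_ALERT_DISTANCE_M - distance_m) / REAR_ALERT_DISTANCE_M)
        vibrate_backrest(side, intensity)

on_rear_distance_update(distance_m=1.2, lateral_offset_m=0.4)  # close on the right
```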

6. Conclusions and Future Work

In conclusion, this study advances our understanding of the roles that visual and haptic feedback can play in enhancing target search performance in dual-task settings. Dynamic visual guidance can improve the reaction and selection times of secondary tasks, and haptic guidance can expedite task-switching reaction time, increase user preference, and reduce task load by allowing for more intuitive and less cognitively demanding interactions. This is particularly crucial in environments where quick decision making and attention to multiple stimuli are essential. Furthermore, our research contributes valuable insights for the design of interfaces, facilitating smoother and more efficient multitasking in complex settings. These insights are instrumental for developers and designers aiming to create more adaptive and intuitive technological environments.
However, this study has several limitations. Its controlled environment may not fully capture the complexities and variability encountered in real-world scenarios. Moreover, our research focused primarily on short-duration tasks; the impact of visual and haptic feedback over extended periods, which could introduce factors such as fatigue or adaptation, was not explored. To address these limitations, future research should investigate the long-term adaptation to, and effectiveness of, these feedback mechanisms. Additionally, testing the guidance systems in more realistic and varied environments would provide deeper insights into their practical applicability and performance across diverse conditions. By expanding the scope of research to include these elements, we can further refine the guidance techniques and enhance their potential benefits for real-world applications.

Author Contributions

Conceptualization, G.W. and H.-H.W.; methodology, G.W.; software, G.R.; validation, H.-H.W.; formal analysis, G.W. and G.R.; investigation, G.W.; resources, G.W.; data curation, G.W.; writing—original draft preparation, G.W.; writing—review and editing, G.W. and G.R.; visualization, G.W.; supervision, H.-H.W.; project administration, G.W.; funding acquisition, G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We appreciate all participants who took part in the studies.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VR    Virtual Reality;
HMD   Head-Mounted Display;
MRTK  Mixed Reality Toolkit;
3D    Three-Dimensional.

References

  1. Volonté, F.; Pugin, F.; Bucher, P.; Sugimoto, M.; Ratib, O.; Morel, P. Augmented reality and image overlay navigation with OsiriX in laparoscopic and robotic surgery: Not only a matter of fashion. J. Hepato Biliary Pancreat. Sci. 2011, 18, 506–509. [Google Scholar] [CrossRef] [PubMed]
  2. Renner, P.; Pfeiffer, T. Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. In Proceedings of the 2017 IEEE Symposium on 3D User Interfaces, Los Angeles, CA, USA, 18–19 March 2017; pp. 186–194. [Google Scholar] [CrossRef]
  3. Schinke, T.; Henze, N.; Boll, S. Visualization of off-screen objects in mobile augmented reality. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, San Francisco, CA, USA, 21–24 September 2012; pp. 313–316. [Google Scholar] [CrossRef]
  4. Gruenefeld, U.; Lange, D.; Hammer, L.; Boll, S.; Heuten, W. FlyingARrow: Pointing Towards Out-of-View Objects on Augmented Reality Devices. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, Munich, Germany, 6–8 June 2018; pp. 1–6. [Google Scholar] [CrossRef]
  5. Warden, A.C.; Wickens, C.D.; Mifsud, D.; Ourada, S.; Clegg, B.A.; Ortega, F.R. Visual Search in Augmented Reality: Effect of Target Cue Type and Location. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022, 66, 373–377. [Google Scholar] [CrossRef]
  6. Baudisch, P.; Rosenholtz, R. Halo: A technique for visualizing off-screen objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 5–10 April 2003; pp. 481–488. [Google Scholar] [CrossRef]
  7. Biocca, F.; Tang, A.; Owen, C.; Xiao, F. Attention funnel: Omnidirectional 3D cursor for mobile augmented reality platforms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 22–27 April 2006; pp. 1115–1122. [Google Scholar] [CrossRef]
  8. Gustafson, S.; Baudisch, P.; Gutwin, C.; Irani, P. Wedge: Clutter-free visualization of off-screen locations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 5–10 April 2008; pp. 787–796. [Google Scholar] [CrossRef]
  9. Israr, A.; Kim, S.C.; Stec, J.; Poupyrev, I. Surround haptics: Tactile feedback for immersive gaming experiences. In Proceedings of the CHI ’12 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 5–10 May 2012; pp. 1087–1090. [Google Scholar] [CrossRef]
  10. Elvitigala, D.S.; Boldu, R.; Nanayakkara, S.; Matthies, D.J.C. TickleFoot: Design, Development and Evaluation of a Novel Foot-Tickling Mechanism That Can Evoke Laughter. ACM Trans. Comput. Hum. Interact. 2022, 29, 20:1–20:23. [Google Scholar] [CrossRef]
  11. Asif, A.; Boll, S. Where to turn my car? comparison of a tactile display and a conventional car navigation system under high load condition. In Proceedings of the 2nd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Pittsburgh, PA, USA, 11–12 November 2010; pp. 64–71. [Google Scholar] [CrossRef]
  12. Prasad, M.; Taele, P.; Goldberg, D.; Hammond, T.A. HaptiMoto: Turn-by-turn haptic route guidance interface for motorcyclists. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 3597–3606. [Google Scholar] [CrossRef]
  13. Zhu, M.; Sun, Z.; Zhang, Z.; Shi, Q.; He, T.; Liu, H.; Chen, T.; Lee, C. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 2020, 6, eaaz8693. [Google Scholar] [CrossRef] [PubMed]
  14. Pfeiffer, M.; Schneegass, S.; Alt, F.; Rohs, M. Let me grab this: A comparison of EMS and vibration for haptic feedback in free-hand interaction. In Proceedings of the 5th Augmented Human International Conference, New York, NY, USA, 7–9 March 2014; pp. 1–8. [Google Scholar] [CrossRef]
  15. Pezent, E.; O’Malley, M.K.; Israr, A.; Samad, M.; Robinson, S.; Agarwal, P.; Benko, H.; Colonnese, N. Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 25–30 April 2020; pp. 1–4. [Google Scholar] [CrossRef]
  16. Ren, G.; Li, W.; O’Neill, E. Towards the design of effective freehand gestural interaction for interactive TV. arXiv 2016, arXiv:1603.08103. [Google Scholar] [CrossRef]
  17. Gaffary, Y.; Lécuyer, A. The Use of Haptic and Tactile Information in the Car to Improve Driving Safety: A Review of Current Technologies. Front. ICT 2018, 5. [Google Scholar] [CrossRef]
  18. Jamil, S.; Golding, A.; Floyd, H.L.; Capelli-Schellpfeffer, M. Human Factors in Electrical Safety. In Proceedings of the 2007 IEEE Petroleum and Chemical Industry Technical Conference, Calgary, AB, Canada, 17–19 September 2007; pp. 1–8. [Google Scholar]
  19. Wickens, C.D. Processing resources and attention. In Multiple Task Performance; CRC Press: Boca Raton, FL, USA, 1991. [Google Scholar]
  20. Sigrist, R.; Rauter, G.; Riener, R.; Wolf, P. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review. Psychon. Bull. Rev. 2013, 20, 21–53. [Google Scholar] [CrossRef] [PubMed]
  21. Marquardt, A.; Trepkowski, C.; Eibich, T.D.; Maiero, J.; Kruijff, E.; Schoning, J. Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays. IEEE Trans. Vis. Comput. Graph. 2020, 26, 3389–3401. [Google Scholar] [CrossRef] [PubMed]
  22. Tivadar, R.I.; Arnold, R.C.; Turoman, N.; Knebel, J.F.; Murray, M.M. Digital haptics improve speed of visual search performance in a dual-task setting. Sci. Rep. 2022, 12, 9728. [Google Scholar] [CrossRef]
  23. Chen, T.; Wu, Y.S.; Zhu, K. Investigating different modalities of directional cues for multi-task visual-searching scenario in virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA, 28 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  24. Teber, D.; Guven, S.; Simpfendörfer, T.; Baumhauer, M.; Güven, E.O.; Yencilek, F.; Gözen, A.S.; Rassweiler, J. Augmented reality: A new tool to improve surgical accuracy during laparoscopic partial nephrectomy? Preliminary in vitro and in vivo results. Eur. Urol. 2009, 56, 332–338. [Google Scholar] [CrossRef]
  25. Wieland, J.; Garcia, R.C.H.; Reiterer, H.; Feuchtner, T. Arrow, Bézier Curve, or Halos?—Comparing 3D Out-of-View Object Visualization Techniques for Handheld Augmented Reality. In Proceedings of the 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Singapore, 17–21 October 2022; pp. 797–806. [Google Scholar] [CrossRef]
  26. Biswas, N.; Singh, A.; Bhattacharya, S. Augmented 3D arrows for visualizing off-screen Points of Interest without clutter. Displays 2023, 79, 102502. [Google Scholar] [CrossRef]
  27. Markov-Vetter, D.; Luboschik, M.; Islam, A.T.; Gauger, P.; Staadt, O. The Effect of Spatial Reference on Visual Attention and Workload during Viewpoint Guidance in Augmented Reality. In Proceedings of the 2020 ACM Symposium on Spatial User Interaction, New York, NY, USA, 17–19 September 2020; pp. 1–10. [Google Scholar] [CrossRef]
  28. Woodworth, J.W.; Yoshimura, A.; Lipari, N.G.; Borst, C.W. Design and Evaluation of Visual Cues for Restoring and Guiding Visual Attention in Eye-Tracked VR. In Proceedings of the 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Shanghai, China, 25–29 March 2023; pp. 442–450. [Google Scholar] [CrossRef]
  29. Yeh, M.; Wickens, C.D. Display Signaling in Augmented Reality: Effects of Cue Reliability and Image Realism on Attention Allocation and Trust Calibration. Hum. Factors 2001, 43, 355–365. [Google Scholar] [CrossRef] [PubMed]
  30. Dixon, B.J.; Daly, M.J.; Chan, H.; Vescan, A.D.; Witterick, I.J.; Irish, J.C. Surgeons blinded by enhanced navigation: The effect of augmented reality on attention. Surg. Endosc. 2013, 27, 454–461. [Google Scholar] [CrossRef] [PubMed]
  31. Vatavu, R.D.; Vanderdonckt, J. Design Space and Users’ Preferences for Smartglasses Graphical Menus: A Vignette Study. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA, 22 November 2020; pp. 1–12. [Google Scholar] [CrossRef]
  32. Israr, A.; Poupyrev, I. Tactile brush: Drawing on skin with a tactile grid display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 2019–2028. [Google Scholar] [CrossRef]
  33. Israr, A.; Poupyrev, I. Control space of apparent haptic motion. In Proceedings of the 2011 IEEE World Haptics Conference, Istanbul, Turkey, 21–24 June 2011; pp. 457–462. [Google Scholar] [CrossRef]
  34. Israr, A.; Zhao, S.; Schwalje, K.; Klatzky, R.; Lehman, J. Feel Effects: Enriching Storytelling with Haptic Feedback. Acm Trans. Appl. Percept. 2014, 11, 11:1–11:17. [Google Scholar] [CrossRef]
  35. Schneider, O.S.; Israr, A.; MacLean, K.E. Tactile Animation by Direct Manipulation of Grid Displays. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, New York, NY, USA, 8–11 November 2015; pp. 21–30. [Google Scholar] [CrossRef]
  36. Elsayed, H.; Weigel, M.; Müller, F.; Schmitz, M.; Marky, K.; Günther, S.; Riemann, J.; Mühlhäuser, M. VibroMap: Understanding the Spacing of Vibrotactile Actuators across the Body. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2020, 4, 125:1–125:16. [Google Scholar] [CrossRef]
  37. Berning, M.; Braun, F.; Riedel, T.; Beigl, M. ProximityHat: A head-worn system for subtle sensory augmentation with tactile stimulation. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, 2015; pp. 31–38. [Google Scholar] [CrossRef]
  38. de Jesus Oliveira, V.A.; Brayda, L.; Nedel, L.; Maciel, A. Designing a Vibrotactile Head-Mounted Display for Spatial Awareness in 3D Spaces. IEEE Trans. Vis. Comput. Graph. 2017, 23, 1409–1417. [Google Scholar] [CrossRef] [PubMed]
  39. Kaul, O.B.; Rohs, M. HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6 May 2017; pp. 3729–3740. [Google Scholar] [CrossRef]
  40. Kaul, O.B.; Rohs, M. Wearable head-mounted 3D tactile display application scenarios. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. Association for Computing Machinery, Florence, Italy, 6–9 September 2016; pp. 1163–1167. [Google Scholar] [CrossRef]
  41. Alirezaee, P.; Weill-Duflos, A.; Schlesinger, J.J.; Cooperstock, J.R. Exploring the Effectiveness of Haptic Alarm Displays for Critical Care Environments. In Proceedings of the 2020 IEEE Haptics Symposium (HAPTICS), Washington, DC, USA, 28–31 March 2020; pp. 948–954. [Google Scholar] [CrossRef]
  42. Mujibiya, A. Haptic feedback companion for Body Area Network using body-carried electrostatic charge. In Proceedings of the 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 9–12 January 2015; pp. 571–572. [Google Scholar]
  43. Withana, A.; Groeger, D.; Steimle, J. Tacttoo: A Thin and Feel-Through Tattoo for On-Skin Tactile Output. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, 14–17 October 2018; pp. 365–378. [Google Scholar] [CrossRef]
  44. Vo, D.B.; Brewster, S.A. Touching the invisible: Localizing ultrasonic haptic cues. In Proceedings of the 2015 IEEE World Haptics Conference (WHC), Evanston, IL, USA, 22–26 June 2015; pp. 368–373. [Google Scholar] [CrossRef]
  45. Harrington, K.; Large, D.R.; Burnett, G.; Georgiou, O. Exploring the Use of Mid-Air Ultrasonic Feedback to Enhance Automotive User Interfaces. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Toronto, ON, Canada, 23–25 September 2018; pp. 11–20. [Google Scholar] [CrossRef]
  46. Ratsamee, P.; Orita, Y.; Kuroda, Y.; Takemura, H. FlowHaptics: Mid-Air Haptic Representation of Liquid Flow. Appl. Sci. 2021, 11, 8447. [Google Scholar] [CrossRef]
  47. Lee, J.; Lee, G. Designing a Non-contact Wearable Tactile Display Using Airflows. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 183–194. [Google Scholar] [CrossRef]
  48. Han, T.; Anderson, F.; Irani, P.; Grossman, T. HydroRing: Supporting Mixed Reality Haptics Using Liquid Flow. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 14–17 October 2018; pp. 913–925. [Google Scholar] [CrossRef]
  49. Trepkowski, C.; Marquardt, A.; Eibich, T.D.; Shikanai, Y.; Maiero, J.; Kiyokawa, K.; Kruijff, E.; Schoning, J.; Konig, P. Multisensory Proximity and Transition Cues for Improving Target Awareness in Narrow Field of View Augmented Reality Displays. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1342–1362. [Google Scholar] [CrossRef]
  50. Shelley, S.; Alonso, M.; Hollowood, J.; Pettitt, M.; Sharples, S.; Hermes, D.; Kohlrausch, A. Interactive Sonification of Curve Shape and Curvature Data. In Proceedings of the Haptic and Audio Interaction Design: 4th International Conference, HAID 2009, Dresden, Germany, 10–11 September 2009; pp. 51–60. [Google Scholar] [CrossRef]
  51. Ribeiro, F.; Florêncio, D.; Chou, P.A.; Zhang, Z. Auditory augmented reality: Object sonification for the visually impaired. In Proceedings of the 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), Banff, AB, Canada, 17–19 September 2012; pp. 319–324. [Google Scholar] [CrossRef]
  52. Blum, J.R.; Bouchard, M.; Cooperstock, J.R. What’s around Me? Spatialized Audio Augmented Reality for Blind Users with a Smartphone. In Proceedings of the Mobile and Ubiquitous Systems: Computing, Networking, and Services: 8th International ICST Conference, MobiQuitous 2011, Copenhagen, Denmark, 6–9 December 2011; pp. 49–62. [Google Scholar] [CrossRef]
  53. Katz, B.F.G.; Kammoun, S.; Parseihian, G.; Gutierrez, O.; Brilhault, A.; Auvray, M.; Truillet, P.; Denis, M.; Thorpe, S.; Jouffrais, C. NAVIG: Augmented reality guidance system for the visually impaired. Virtual Real. 2012, 16, 253–269. [Google Scholar] [CrossRef]
  54. Black, D.; Hettig, J.; Luz, M.; Hansen, C.; Kikinis, R.; Hahn, H. Auditory feedback to support image-guided medical needle placement. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1655–1663. [Google Scholar] [CrossRef]
  55. Roodaki, H.; Navab, N.; Eslami, A.; Stapleton, C.; Navab, N. SonifEye: Sonification of Visual Information Using Physical Modeling Sound Synthesis. IEEE Trans. Vis. Comput. Graph. 2017, 23, 2366–2371. [Google Scholar] [CrossRef] [PubMed]
  56. Soto-Faraco, S.; Lyons, J.; Gazzaniga, M.; Spence, C.; Kingstone, A. The ventriloquist in motion: Illusory capture of dynamic information across sensory modalities. Cogn. Brain Res. 2002, 14, 139–146. [Google Scholar] [CrossRef] [PubMed]
  57. Hopkins, K.; Kass, S.J.; Blalock, L.D.; Brill, J.C. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario. Ergonomics 2017, 60, 692–700. [Google Scholar] [CrossRef] [PubMed]
  58. Ngo, M.K.; Spence, C. Auditory, tactile, and multisensory cues facilitate search for dynamic visual stimuli. Atten. Percept. Psychophys. 2010, 72, 1654–1665. [Google Scholar] [CrossRef] [PubMed]
  59. Koelewijn, T.; Bronkhorst, A.; Theeuwes, J. Competition between auditory and visual spatial cues during visual task performance. Exp. Brain Res. 2009, 195, 593–602. [Google Scholar] [CrossRef] [PubMed]
  60. Lehtinen, V.; Oulasvirta, A.; Salovaara, A.; Nurmi, P. Dynamic tactile guidance for visual search tasks. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, Cambridge, MA, USA, 7–10 October 2012; pp. 445–452. [Google Scholar]
  61. Chung, S.; Lee, K.; Oh, U. Understanding the Two-Step Nonvisual Omnidirectional Guidance for Target Acquisition in 3D Spaces. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bari, Italy, 4–8 October 2021; pp. 339–346. [Google Scholar] [CrossRef]
  62. Strasburger, H.; Rentschler, I.; Juttner, M. Peripheral vision and pattern recognition: A review. J. Vis. 2011, 11, 13. [Google Scholar] [CrossRef] [PubMed]
  63. Marquardt, A.; Trepkowski, C.; Eibich, T.D.; Maiero, J.; Kruijff, E. Non-Visual Cues for View Management in Narrow Field of View Augmented Reality Displays. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Beijing, China, 14–18 October 2019; pp. 190–201. [Google Scholar] [CrossRef]
  64. Ren, G.; Wei, S.; O’Neill, E.; Chen, F. Towards the Design of Effective Haptic and Audio Displays for Augmented Reality and Mixed Reality Applications. Adv. Multimed. 2018, 2018, 4517150. [Google Scholar] [CrossRef]
  65. Maltz, M.; Shinar, D. New Alternative Methods of Analyzing Human Behavior in Cued Target Acquisition. Hum. Factors 2003, 45, 281–295. [Google Scholar] [CrossRef]
  66. Wang, G.; Ren, G.; Hong, X.; Peng, X.; Li, W.; O’Neill, E. Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality. Information 2022, 13, 566. [Google Scholar] [CrossRef]
Figure 1. Experimental settings and arrow design.
Figure 2. Mean selection time, head rotation degree, and head movement distance results of Primary Task in Study 1.
Figure 3. Mean reaction time, selection time, head rotation degree, and head movement distance results of Secondary Task in Study 1.
Figure 4. Mean user preference and NASA TLX of Study 1.
Figure 5. Experimental settings, system architecture, and haptic guidance design.
Figure 6. Mean selection time, head rotation degree, and head movement distance results of Primary Task in Study 2.
Figure 7. Mean reaction time, selection time, head rotation degree, and head movement distance results of Secondary Task in Study 2.
Figure 8. Mean user preference and NASA TLX of Study 2.
Table 1. User scenarios for visual and haptic feedback.

Visual Feedback:      Surgery [1,24], Manual assembly assistance [2], Tourism navigation [3], Piloting unmanned air vehicles [29], Endoscopic navigation [30]
Haptic Feedback:      Driving [22], Target searching [60], Digital gaming [9], Interactive storytelling [34,35], Stress relief [10], Automobile navigation [11,12]
Multimodal Feedback:  Surgery [54,55], Object detecting [49]
Table 2. Comparison of Static and Dynamic Arrows for Secondary Task in Study 1.

                      Static Arrow   Dynamic Arrow   Improvement (%)
Reaction Time         5.87 s         4.66 s          20.61%
Selection Time        3.67 s         3.32 s          9.54%
Head Rotation Angle   152.26°        141.06°         7.36%
Table 3. Comparison of visual feedback and visual–haptic feedback for Secondary Task in Study 2.

                  Visual Feedback   Visual–Haptic Feedback   Improvement (%)
Reaction Time     5.24 s            3.86 s                   26.34%
User Preference   4.25              7.75                     82.35%
Task Load         34.31             23.89                    30.37%
Table 4. Comparison of mean selection times and head rotation angles for targets in side and center columns during Primary and Secondary Tasks in Study 2.

             Primary Task                        Secondary Task
             Selection     Head Rotation         Selection     Head Rotation
             Time (s)      Angle (°)             Time (s)      Angle (°)
Column 1     3.01          79.92                 3.12          133.85
Column 3     2.59          67.79                 4.11          193.91
Column 5     3.06          82.95                 3.25          139.82
