Visual Search in (Virtual) Reality

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Neural Engineering, Neuroergonomics and Neurorobotics".

Deadline for manuscript submissions: closed (25 December 2020) | Viewed by 44339

Special Issue Editor


Prof. Dr. Stefan Pollmann
Guest Editor
1. Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
2. Department of Psychology and Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
Interests: visual search; attention; visual learning and memory; virtual reality; fMRI

Special Issue Information

Dear Colleagues,

Visual search on 2D screens has yielded a wealth of insights into the processes that guide attention. In recent years, however, virtual reality technology has become more accessible, enabling us to investigate visual search in more realistic, immersive settings. Likewise, data recording in natural environments has become more reliable, e.g., for eye movement recordings or EEG. These technologies allow us to investigate new questions: Are the process assumptions derived from lab experiments still valid in the “real (or virtual) world”? Are there additional processes that arise in these more complex situations? How do the underlying neural processes and networks change? For this Special Issue, I invite you to contribute your research on visual search (broadly conceived) in virtual reality or in natural situations. Basic research with healthy or clinical populations is welcome, as are behavioral and neuroscientific methods.

Prof. Dr. Stefan Pollmann
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Visual search
  • Virtual reality
  • Mobile eye tracking
  • Mobile EEG
  • fMRI
  • Immersion

Published Papers (11 papers)

Research

22 pages, 56463 KiB  
Article
Saliency-Aware Subtle Augmentation Improves Human Visual Search Performance in VR
by Olga Lukashova-Sanz and Siegfried Wahl
Brain Sci. 2021, 11(3), 283; https://doi.org/10.3390/brainsci11030283 - 25 Feb 2021
Cited by 3 | Viewed by 2362
Abstract
Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, with the task of finding the target as fast as possible. Mean search time and the proportion of trials in which participants failed to find the target were compared across conditions, and the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that performance in a challenging, realistic visual search scenario can be improved by a subtle saliency-aware scene modulation. The current work provides insight into potential visual augmentation designs aimed at improving users' performance in everyday visual search tasks.
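
As a rough illustration of the linear mixed model analysis mentioned above (our sketch, not the authors' code), the following Python snippet fits a mixed model with a fixed effect of blur condition and a random intercept per participant; the file name and the column names (participant, blur_condition, search_time) are assumptions.

```python
# Hypothetical analysis sketch, not the authors' pipeline.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("search_trials.csv")  # assumed trial-level data

# Fixed effect of blur condition on search time,
# random intercept per participant.
model = smf.mixedlm("search_time ~ C(blur_condition)",
                    data=trials,
                    groups=trials["participant"])
result = model.fit()
print(result.summary())  # condition estimates vs. the reference level
```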

20 pages, 4349 KiB  
Article
Influence of Systematic Gaze Patterns in Navigation and Search Tasks with Simulated Retinitis Pigmentosa
by Alexander Neugebauer, Katarina Stingl, Iliya Ivanov and Siegfried Wahl
Brain Sci. 2021, 11(2), 223; https://doi.org/10.3390/brainsci11020223 - 12 Feb 2021
Cited by 4 | Viewed by 2235
Abstract
People living with a degenerative retinal disease such as retinitis pigmentosa often face difficulties navigating in crowded places and avoiding obstacles due to their severely limited field of view. This study aimed to assess the potential of different patterns of eye movement (scanning patterns) to (i) increase the effective area of perception of participants with a simulated retinitis pigmentosa scotoma and (ii) maintain or improve performance in visual tasks. Using a virtual reality headset with eye tracking, we simulated tunnel vision of 20° in diameter in visually healthy participants (n = 9). Employing this setup, we investigated how different scanning patterns influence the participants' dynamic field of view—the average area over time covered by the field of view—in an obstacle avoidance task and in a search task. One of the two tested scanning patterns showed a significant improvement in both dynamic field of view (navigation 11%, search 7%) and collision avoidance (33%) compared to trials without the suggested scanning pattern. However, participants took significantly longer (31%) to finish the navigation task when applying this scanning pattern. No significant improvements in search task performance were found when applying scanning patterns.
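
For intuition about the simulated 20° tunnel vision, here is a hedged geometric sketch (ours, not from the paper): converting an angular aperture into an on-screen mask radius for a flat display at a known viewing distance. VR headsets require the vendor's projection model instead, and the example numbers are purely illustrative.

```python
# Back-of-the-envelope geometry for a circular aperture of a given
# angular diameter on a flat screen; illustrative values only.
import math

def aperture_radius_px(diameter_deg: float,
                       viewing_distance_cm: float,
                       pixels_per_cm: float) -> float:
    """Mask radius in pixels for the given angular diameter."""
    radius_cm = viewing_distance_cm * math.tan(math.radians(diameter_deg / 2))
    return radius_cm * pixels_per_cm

# Example: a 20 deg aperture at 60 cm on a screen with 38 px/cm.
print(aperture_radius_px(20.0, 60.0, 38.0))  # ~402 px
```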

17 pages, 2121 KiB  
Article
Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality
by Julia Beitner, Jason Helbing, Dejan Draschkow and Melissa L.-H. Võ
Brain Sci. 2021, 11(1), 44; https://doi.org/10.3390/brainsci11010044 - 04 Jan 2021
Cited by 12 | Viewed by 3388
Abstract
Repeated search studies are a hallmark in the investigation of the interplay between memory and attention. Because response times are usually averaged across searches, the substantial decrease occurring between the first and second search through the same environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches, yet the nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated, which takes a toll on search time at first but, once complete, guides subsequent searches.
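
To make the averaging point concrete, a small hypothetical sketch (not from the paper): computing the mean response time per within-scene search index exposes the first-to-second drop that a grand average obscures. The file and the columns search_index and rt are assumptions.

```python
# Hypothetical illustration of the search initiation effect.
import pandas as pd

trials = pd.read_csv("repeated_search.csv")  # assumed trial-level data

per_index = trials.groupby("search_index")["rt"].mean()
print(per_index)            # expect the largest drop from index 1 to 2
print(trials["rt"].mean())  # the grand average hides that drop
```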

12 pages, 1716 KiB  
Article
Preserved Contextual Cueing in Realistic Scenes in Patients with Age-Related Macular Degeneration
by Stefan Pollmann, Lisa Rosenblum, Stefanie Linnhoff, Eleonora Porracin, Franziska Geringswald, Anne Herbik, Katja Renner and Michael B. Hoffmann
Brain Sci. 2020, 10(12), 941; https://doi.org/10.3390/brainsci10120941 - 07 Dec 2020
Cited by 1 | Viewed by 1918
Abstract
Foveal vision loss has been shown to reduce efficient visual search guidance due to contextual cueing by incidentally learned contexts. However, previous studies used artificial (T-among-L) search paradigms that prevent the memorization of a target in a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes. In contrast to the contextual cueing deficits observed with artificial displays, contextual cueing in patients with age-related macular degeneration (AMD) did not differ from that of age-matched, normally sighted controls. We discuss this in the context of visuospatial working-memory demands, for which both eye movement control in the presence of central vision loss and memory-guided search may compete. Memory-guided search in semantically rich scenes may depend less on visuospatial working memory than search in abstract displays, potentially explaining intact contextual cueing in the former but not the latter. In a practical sense, our findings may indicate that patients with AMD are less impaired in everyday search than previous laboratory experiments suggested. This shows the usefulness of realistic stimuli in experimental clinical research.
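
For reference, contextual cueing is commonly quantified as the per-participant difference between mean search times in novel and repeated contexts. A minimal sketch (ours, not the authors' analysis; file and column names are hypothetical):

```python
# Cueing effect = mean RT(novel) - mean RT(repeated) per participant;
# positive values indicate a cueing benefit. Data layout is assumed.
import pandas as pd

trials = pd.read_csv("cueing_trials.csv")  # assumed trial-level data

means = (trials.groupby(["participant", "context"])["rt"]
               .mean()
               .unstack())  # columns: "novel", "repeated"
cueing_effect = means["novel"] - means["repeated"]
print(cueing_effect.describe())
```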

13 pages, 2035 KiB  
Article
Towards Interactive Search: Investigating Visual Search in a Novel Real-World Paradigm
by Marian Sauter, Maximilian Stefani and Wolfgang Mack
Brain Sci. 2020, 10(12), 927; https://doi.org/10.3390/brainsci10120927 - 01 Dec 2020
Cited by 6 | Viewed by 2920
Abstract
An overwhelming majority of studies on visual search and selective attention have been conducted using computer screens. There are arguably shortcomings in transferring knowledge from such studies to real-world search behavior, as the findings are based on viewing static pictures on a screen, which sits poorly with the dynamic and interactive nature of vision in the real world. It is therefore crucial to take visual search research into the real world to study everyday visual search processes. The aim of the present study was to develop an interactive search paradigm that can serve as a “bridge” between classical computerized search and everyday interactive search. We based our search paradigm on simple LEGO® bricks arranged on tabletop trays to ensure comparability with classical computerized visual search studies while providing room for easily increasing the complexity of the search environment. We found that targets were grasped more slowly when there were more distractors (Experiment 1) and that there were sizable differences between various search conditions (Experiment 2), largely in line with classical visual search research and revealing similarities to research in natural scenes. Our paradigm can therefore be seen as a valuable asset complementing visual search research in an environment between computerized search and everyday search.

26 pages, 11389 KiB  
Article
Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment
by Erwan David, Julia Beitner and Melissa Le-Hoa Võ
Brain Sci. 2020, 10(11), 841; https://doi.org/10.3390/brainsci10110841 - 12 Nov 2020
Cited by 17 | Viewed by 3620 | Correction
Abstract
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations in which the scene would be omnidirectional and the entire field of view could be of use. In this study, we had participants look for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve the higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision had a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information on how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
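
The eye/head/gaze segregation rests on a simple composition: the gaze direction in the world is the head orientation applied to the eye-in-head direction. A minimal sketch of that step (ours; real VR SDKs typically expose these vectors directly, and the data formats here are assumptions):

```python
# Compose head orientation and eye-in-head direction into world gaze.
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_in_world(head_quat_xyzw: np.ndarray,
                  eye_in_head_dir: np.ndarray) -> np.ndarray:
    """Rotate the eye-in-head unit vector by the head orientation."""
    return R.from_quat(head_quat_xyzw).apply(eye_in_head_dir)

# Example: head yawed 30 deg about the vertical axis,
# eyes looking straight ahead within the head.
head = R.from_euler("y", 30, degrees=True).as_quat()
print(gaze_in_world(head, np.array([0.0, 0.0, -1.0])))
```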

18 pages, 1030 KiB  
Article
Crowding Effects across Depth Are Fixation-Centered for Defocused Flankers and Observer-Centered for Defocused Targets
by Lisa V. Eberhardt and Anke Huckauf
Brain Sci. 2020, 10(9), 596; https://doi.org/10.3390/brainsci10090596 - 28 Aug 2020
Cited by 3 | Viewed by 2255
Abstract
Depth needs to be considered to understand visual information processing in cluttered environments in the wild. Since differences in depth depend on current gaze position, eye movements were avoided by using short presentations in a real-depth setup. With vision thus restricted to the periphery, crowding was tested, that is, the impairment of peripheral target recognition by the presence of nearby flankers was measured. Real depth was presented by a half-transparent mirror that aligned the displays of two orthogonally arranged, distance-adjustable screens. Fixation depth was at a distance of 190 cm; defocused depth planes were presented either near or far, in front of or behind the fixation depth, all within the depth of field. In Experiments 1 and 2, flankers were presented defocused while the to-be-identified targets were on the fixation depth plane. In Experiments 3–5, targets were presented defocused while the flankers were kept on the fixation depth plane. Results for defocused flankers indicate increased crowding with increased flanker distance from the target at focus (near to far). For defocused targets, however, crowding was increased for targets in front of the focus compared to targets behind it; that is, defocused targets produce decreased crowding with increased target distance from the observer. To conclude, the effects of flankers in depth seem to be centered on fixation, while the effects of target depth seem to be observer-centered.

19 pages, 4334 KiB  
Article
Virtual Reality Potentiates Emotion and Task Effects of Alpha/Beta Brain Oscillations
by David Schubring, Matthias Kraus, Christopher Stolz, Niklas Weiler, Daniel A. Keim and Harald Schupp
Brain Sci. 2020, 10(8), 537; https://doi.org/10.3390/brainsci10080537 - 10 Aug 2020
Cited by 21 | Viewed by 5165
Abstract
Technological progress has increased neuropsychological research on emotion and attention using virtual reality (VR). However, direct comparisons between conventional two-dimensional (2D) and VR stimulation are lacking. The present study therefore compared electroencephalography (EEG) correlates of explicit task and implicit emotional attention between 2D and VR stimulation. Participants (n = 16) viewed angry and neutral faces of equal size and distance in both 2D and VR while being asked to count one of the two facial expressions. For the main effects of emotion (angry vs. neutral) and task (target vs. nontarget), established event-related potentials (ERPs), namely the late positive potential (LPP) and the target P300, were replicated. VR stimulation led to overall larger ERPs than 2D but did not interact with emotion or task effects. In the frequency domain, alpha/beta activity was larger in VR than in 2D stimulation already in the baseline period. Of note, while alpha/beta event-related desynchronization (ERD) for emotion and task conditions was seen in both VR and 2D stimulation, these effects were significantly stronger in VR than in 2D. These results suggest that the enhanced immersion in the stimulus materials enabled by VR technology can potentiate induced brain oscillation effects of implicit emotional and explicit task processing.
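
Event-related desynchronization has a standard textbook formulation: the percentage change of band power relative to a baseline interval. The sketch below implements that formula (not the authors' pipeline); the array shapes and the simulated data are assumptions.

```python
# ERD/ERS in percent: negative values indicate desynchronization.
import numpy as np

def erd_percent(power_event: np.ndarray,
                power_baseline: np.ndarray) -> np.ndarray:
    p_event = power_event.mean(axis=0)     # average over trials
    p_base = power_baseline.mean(axis=0)
    return (p_event - p_base) / p_base * 100.0

# Example with simulated alpha-band power (trials x time samples).
rng = np.random.default_rng(0)
baseline = rng.gamma(2.0, 1.0, size=(40, 100))
event = baseline * 0.7                     # simulated 30% power drop
print(erd_percent(event, baseline).mean()) # approximately -30
```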

9 pages, 771 KiB  
Article
Contextual-Cueing beyond the Initial Field of View—A Virtual Reality Experiment
by Nico Marek and Stefan Pollmann
Brain Sci. 2020, 10(7), 446; https://doi.org/10.3390/brainsci10070446 - 13 Jul 2020
Cited by 6 | Viewed by 3068
Abstract
In visual search, participants can incidentally learn spatial target-distractor configurations, leading to shorter search times for repeated compared to novel configurations. Usually, this contextual cueing is tested within the limited visual field provided by a two-dimensional computer monitor. Here, we present for the first time an implementation of a classic contextual cueing task (search for a T-shape among L-shapes) in a three-dimensional virtual environment. This enabled us to test whether the typical finding of incidental learning of repeated search configurations, manifested in shorter search times, holds in a three-dimensional virtual reality (VR) environment. One specific aspect tested by combining VR and contextual cueing was whether cueing holds for targets outside the initial field of view (FOV), which require head movements to be found. In keeping with two-dimensional search studies, reduced search times were observed after the first epoch and remained stable for the rest of the experiment. Importantly, comparable search time reductions were observed for targets both within and outside the initial FOV. The results show that a repeated distractors-only configuration in the initial FOV can guide search to target locations that require a head movement to be seen.
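
The within/outside-initial-FOV split can be made concrete with a small sketch (ours; the 100° horizontal FOV and the azimuth convention are assumptions, not values from the paper):

```python
# Classify targets by initial visibility: a target lies outside the
# initial FOV if its azimuth relative to the starting head direction
# exceeds half the (assumed) horizontal FOV.
import numpy as np

def outside_initial_fov(target_azimuth_deg: np.ndarray,
                        fov_deg: float = 100.0) -> np.ndarray:
    """True where a target lies beyond the initial field of view."""
    return np.abs(target_azimuth_deg) > fov_deg / 2

azimuths = np.array([-160.0, -30.0, 45.0, 120.0])
print(outside_initial_fov(azimuths))  # [ True False False  True]
```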

Other

1 page, 185 KiB
Correction
Correction: David et al. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci. 2020, 10, 841
by Erwan David, Julia Beitner and Melissa Le-Hoa Võ
Brain Sci. 2021, 11(9), 1215; https://doi.org/10.3390/brainsci11091215 - 15 Sep 2021
Viewed by 1115
Abstract
We wish to make the following correction to the published paper “Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment” [...]
11 pages, 682 KiB  
Opinion
Building, Hosting and Recruiting: A Brief Introduction to Running Behavioral Experiments Online
by Marian Sauter, Dejan Draschkow and Wolfgang Mack
Brain Sci. 2020, 10(4), 251; https://doi.org/10.3390/brainsci10040251 - 24 Apr 2020
Cited by 77 | Viewed by 13829
Abstract
Researchers have ample reasons to take their experimental studies out of the lab and into the online wilderness. For some, it is out of necessity, due to an unforeseen laboratory closure or difficulties in recruiting on-site participants. Others want to benefit from the large and diverse online population. However, the transition from in-lab to online data acquisition is not trivial and might seem overwhelming at first. To facilitate this transition, we present an overview of actively maintained solutions for the critical components of successful online data acquisition: creating, hosting, and recruiting. Our aim is to provide a brief introductory resource and discuss important considerations for researchers taking their first steps towards online experimentation.
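
The paper surveys dedicated tools for building, hosting, and recruiting; none of them is reproduced here. As a generic aside, a browser-based experiment can be smoke-tested locally before deployment using only Python's standard library (assuming the experiment's HTML files sit in the current directory):

```python
# Generic local preview of a web-based experiment; not tied to any tool
# discussed in the paper. Serves the current directory over HTTP.
import http.server
import socketserver

PORT = 8000  # arbitrary local port

with socketserver.TCPServer(("", PORT),
                            http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Open http://localhost:{PORT} in a browser (Ctrl+C to stop)")
    httpd.serve_forever()
```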
