What We Bring to the Perceptual Table: Perceiver-Based Factors in Visual Perception

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (15 December 2021)

Special Issue Editors

Department of Psychology, Macquarie University, Sydney, NSW, Australia
Interests: perceptual expertise; face perception; object perception; visual working memory; visual learning; attention
School of Psychology, University of Sydney, Sydney, Australia
Interests: object perception; attention; decision making; neural decoding; human brain imaging
School of Psychology, UNSW Sydney, Kensington, NSW, Australia
Interests: attention; perception; emotion; cognition–emotion interactions

Special Issue Information

Dear Colleagues,

How do experience and knowledge shape what we see and how we see it? For decades, debate has raged over whether visual perception is shaped by experience, knowledge, and other perceiver-based factors such as motivation, learning, preference, and emotion. This Special Issue aims to reframe this “whether or not” debate and instead embrace the complexity and variety of findings on the influence of perceiver-based factors on visual perception. Together, the contributions to this Special Issue will paint a picture of the perceptual landscape, highlighting which aspects of perceptual processing may be open to such influences and which may be immune to them. We invite articles on all aspects of visual processing, including, but not limited to, object recognition, perceptual organization, attention, and visual working memory.

Dr. Kim M. Curby
Dr. Thomas A. Carlson
Dr. Steven B. Most
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • perceiver-based factors
  • top-down effects
  • experience and learning
  • categories and concepts
  • knowledge
  • motivation and emotion
  • individual differences
  • cognitive penetrability

Published Papers (6 papers)


Research


14 pages, 866 KiB  
Article
When It’s Not Worn on the Face: Trait Anxiety and Attention to Neutral Faces Semantically Linked to Threat
by Kim M. Curby and Jessica A. Collins
Vision 2024, 8(1), 15; https://doi.org/10.3390/vision8010015 - 19 Mar 2024
Abstract
While our direct observations of the features or behaviours of the stimuli around us tell us much about them (e.g., should they be feared?), much of our knowledge is untethered from directly observable properties (e.g., acquired through what we have learned or been told about them, or “semantic knowledge”). Here, we ask whether otherwise neutral visual stimuli that participants learn to associate with emotional qualities in the lab come to be attended to in a similar way to stimuli whose emotional qualities can be discerned from their visual properties. In Experiment 1, participants learned to associate negative or neutral characteristics with neutral faces, which then served as valid or invalid spatial cues to targets in an attentional disengagement paradigm. The performance of participants higher in trait anxiety was consistent with attentional avoidance of faces with learned negative associations, while participants lower in trait anxiety showed a general response slowing in trials with these stimuli, compared to those with neutral associations. In contrast, in Experiment 2, using (visually) expressive (angry) faces, the performance of participants higher in trait anxiety was consistent with difficulty disengaging from visually threatening faces, while the performance of those lower in trait anxiety appeared unaffected by the valence of the stimuli. These findings suggest that (1) emotionality acquired indirectly via learned semantic knowledge impacts how attention is allocated to face stimuli, and this impact is influenced by trait anxiety, and (2) the effects of stimulus emotionality differ depending on whether it is acquired indirectly or directly via the perceptual features of the stimulus. These differences are discussed in the context of the variability of attention bias effects reported in the literature and the time course of the impacts of emotionality on stimulus processing.

12 pages, 933 KiB  
Article
Mimicking Facial Expressions Facilitates Working Memory for Stimuli in Emotion-Congruent Colours
by Thaatsha Sivananthan, Steven B. Most and Kim M. Curby
Vision 2024, 8(1), 4; https://doi.org/10.3390/vision8010004 - 30 Jan 2024
Abstract
It is one thing for everyday phrases like “seeing red” to link some emotions with certain colours (e.g., anger with red), but can such links measurably bias information processing? We investigated whether emotional face information (angry/happy/neutral) held in visual working memory (VWM) enhances memory for shapes presented in a conceptually consistent colour (red or green) (Experiment 1). Although emotional information held in VWM appeared not to bias memory for coloured shapes in Experiment 1, exploratory analyses suggested that participants who physically mimicked the face stimuli were better at remembering congruently coloured shapes. Experiment 2 confirmed this finding by asking participants to hold the faces in mind while either mimicking or labelling the emotional expressions of face stimuli. Once again, those who mimicked the expressions were better at remembering shapes with emotion-congruent colours, whereas those who simply labelled them were not. Thus, emotion–colour associations appear powerful enough to guide attention, but—consistent with proposed impacts of “embodied emotion” on cognition—such effects emerged when emotion processing was facilitated through facial mimicry.

16 pages, 1827 KiB  
Communication
Satisfaction of Search Can Be Ameliorated by Perceptual Learning: A Proof-of-Principle Study
by Erin Park, Fallon Branch and Jay Hegdé
Vision 2022, 6(3), 49; https://doi.org/10.3390/vision6030049 - 10 Aug 2022
Abstract
When searching a visual image that contains multiple target objects of interest, human subjects often show a satisfaction of search (SOS) effect, whereby if the subjects find one target, they are less likely to find additional targets in the image. Reducing SOS or, equivalently, subsequent search miss (SSM), is of great significance in many real-world situations where it is of paramount importance to find all targets in a given image, not just one. However, studies have shown that even highly trained and experienced subjects, such as expert radiologists, are subject to SOS. Here, using the detection of camouflaged objects (or camouflage-breaking) as an illustrative case, we demonstrate that training naïve subjects to detect camouflaged objects more effectively has the side effect of reducing their SOS. We tested subjects in the SOS task before and after they were trained in camouflage-breaking. During SOS testing, subjects viewed naturalistic scenes that contained zero, one, or two targets, depending on the image. As expected, before camouflage training, subjects showed a strong SOS effect, whereby if they had found a target with relatively high visual saliency in a given image, they were less likely to have also found a lower-saliency target when one existed in the image. Subjects were then trained in the camouflage-breaking task to criterion using non-SOS images, i.e., camouflage images that contained zero or one target. Surprisingly, the trained subjects no longer showed significant levels of SOS. This reduction was specific to the particular background texture in which the subjects received camouflage training; subjects continued to show significant SOS when tested using a different background texture in which they had not received camouflage training. A separate experiment showed that the reduction in SOS was not attributable to non-specific exposure or practice effects. Together, our results demonstrate that perceptual expertise can, in principle, reduce SOS, even when the perceptual training does not specifically target SOS reduction.

12 pages, 1488 KiB  
Article
Preference at First Sight: Effects of Shape and Font Qualities on Evaluation of Object-Word Pairs
by Olivia S. Cheung, Oliver Heyn and Tobiasz Trawiński
Vision 2022, 6(2), 22; https://doi.org/10.3390/vision6020022 - 12 Apr 2022
Abstract
Subjective preferences for visual qualities of shapes and fonts have been separately reported. Such preferences are often similarly attributed to factors such as aesthetic impressions, meaning attributed to the visual properties, or processing fluency. Because shapes and fonts have rarely been studied together, we investigated whether these qualities had a similar impact on preference judgments of object-word pairs. Each pair consisted of an abstract object with either preferred or disliked shape qualities and a pseudoword with either preferred or disliked font qualities. We found that only shape qualities, but not font qualities, influenced preference ratings of the object-word pairs, with higher preferences for pairs with preferred than disliked shapes. Moreover, eye movement results indicated that while participants fixated the word before the object, their prolonged fixation on the object when first attending to it might have contributed to the preference ratings. Nonetheless, other measures, including response times, total fixation numbers, and total dwell time, showed different patterns for shape and font qualities, revealing that participants attended more to objects with preferred than disliked shapes, and to words with disliked than preferred fonts. Taken together, these results suggest that shape and font qualities have differential influences on preferences and on the processing of objects and words.

21 pages, 1635 KiB  
Article
Semantic Expectation Effects on Object Detection: Using Figure Assignment to Elucidate Mechanisms
by Rachel M. Skocypec and Mary A. Peterson
Vision 2022, 6(1), 19; https://doi.org/10.3390/vision6010019 - 21 Mar 2022
Cited by 2
Abstract
Recent evidence that object detection is improved following valid rather than invalid labels suggests that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only for 90 or 100 ms. A familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar-configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. This orientation dependency indicates that the effects are mediated by activated object representations rather than by features, which are invariant over orientation. Following invalid SSC labels (Study 2), accurate detection RTs were longer than control for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics; it entails semantics.

Other


12 pages, 976 KiB  
Brief Report
Visual Search Asymmetry Due to the Relative Magnitude Represented by Number Symbols
by Benjamin A. Motz, Robert L. Goldstone, Thomas A. Busey and Richard W. Prather
Vision 2021, 5(3), 42; https://doi.org/10.3390/vision5030042 - 17 Sep 2021
Abstract
In visual search tasks, physically large target stimuli are more easily identified among small distractors than are small targets among large distractors. The present study extends this finding by presenting preliminary evidence of a new search asymmetry: stimuli that symbolically represent larger magnitude are identified more easily among featurally equivalent distractors that represent smaller magnitude. Participants performed a visual search task using line-segment digits representing the numbers 2 and 5, and the numbers 6 and 9, as well as comparable non-numeric control stimuli. In three experiments, we found that search times are faster when the target is a digit that represents a larger magnitude than the distractor, although this pattern was not evident in one additional experiment. The results provide suggestive evidence that the magnitude of a number symbol can affect perceptual comparisons between number symbols, and that the semantic meaning of a target stimulus can systematically affect visual search.
