Spatial Audio Technology, Acoustical Science and Applied Psychoacoustics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (20 September 2022) | Viewed by 3216

Special Issue Editor


Dr. William L. Martens
Guest Editor
School of Medical Sciences, University of Sydney, Sydney, NSW 2006, Australia
Interests: applied psychoacoustics; auditory display; influence of room acoustics on reproduced sound; musical acoustics and musical timbre perception; sound quality engineering; spatial hearing; spatial sound processing; subjective evaluation of audio signals and systems

Special Issue Information

Dear Colleagues,

Given the recent rapid advances in interactive spatial audio technologies for virtual and augmented reality applications, this Special Issue brings together research results from experts across the cognate disciplines of acoustical science, audio engineering practice, and applied psychoacoustics. We seek submissions from research projects that bridge the gap between these disciplines to produce new knowledge about which technological solutions work best in practical use cases for functional virtual auditory display (VAD) systems. Although work employing loudspeaker-based spatial audio technologies is within the scope of this Special Issue, we particularly encourage submissions reporting on the research and development of headphone-based VAD systems for head-tracked virtual and augmented reality applications.

The real-time user interaction afforded by VAD systems that support a user's active localization of virtual sound sources has opened an avenue for more productive research and development of spatial audio processing that makes better use of head-related transfer functions (HRTFs). This call for submissions therefore seeks reports on research demonstrating the advantages of such real-time interaction with displayed virtual sound sources, including listeners' adaptation to generic HRTFs and the personalization of customizable HRTFs for more optimal VAD performance. (For illustration, a minimal sketch of head-tracked HRTF rendering follows this letter.)

Dr. William L. Martens
Guest Editor
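
To make the kind of processing described in the letter concrete, below is a minimal, illustrative sketch of head-tracked binaural rendering. It is written for this page rather than drawn from any submission: the name render_binaural, the azimuth-indexed hrtf_db, and the ITD-only placeholder "HRTFs" are all assumptions made so the example is self-contained and runnable. A real VAD system would interpolate measured or personalized HRTFs and process audio block-wise in real time.

```python
# A minimal sketch of head-tracked binaural rendering (illustrative only).
# hrtf_db stands in for a measured or generic HRTF set indexed by azimuth;
# the ITD-only placeholder pairs below exist solely so the example runs.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, source_az_deg, head_yaw_deg, hrtf_db):
    """Convolve a mono signal with the HRTF pair nearest to the
    head-relative source azimuth (world azimuth minus head yaw)."""
    rel_az = (source_az_deg - head_yaw_deg) % 360.0
    # Nearest-neighbour selection on the circle (interpolation omitted).
    nearest = min(hrtf_db, key=lambda az: min(abs(az - rel_az),
                                              360.0 - abs(az - rel_az)))
    h_left, h_right = hrtf_db[nearest]
    return np.stack([fftconvolve(mono, h_left),
                     fftconvolve(mono, h_right)])

if __name__ == "__main__":
    fs = 48_000
    burst = np.random.randn(int(0.03 * fs))  # 30 ms noise burst stimulus

    def fake_pair(az_deg):
        # ITD-only placeholder: delay the far ear by up to ~0.7 ms.
        itd = int(round(7e-4 * fs * np.sin(np.deg2rad(az_deg))))
        hl, hr = np.zeros(128), np.zeros(128)
        hl[64 + max(itd, 0)] = 1.0   # left ear lags for sources on the right
        hr[64 + max(-itd, 0)] = 1.0  # right ear lags for sources on the left
        return hl, hr

    hrtf_db = {az: fake_pair(az) for az in range(0, 360, 5)}
    # As the tracked head turns toward the source, the head-relative azimuth
    # shrinks, so the rendered image stays fixed in the world.
    stereo = render_binaural(burst, source_az_deg=30.0, head_yaw_deg=10.0,
                             hrtf_db=hrtf_db)
    print(stereo.shape)  # (2, len(burst) + 127)
```

The key design point the sketch isolates is the counter-rotation of the source direction by the tracked head yaw before HRTF selection; everything else (interpolation, crossfading between filters, personalization) is refinement on top of that step.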

Keywords

  • virtual auditory display
  • sound localization
  • interactive spatial audio processing
  • personalized head-related transfer functions
  • head-mounted displays
  • head movements

Published Papers (1 paper)


Research

8 pages, 857 KiB  
Article
Effects of Visually Induced Self-Motion on Sound Localization Accuracy
by Akio Honda, Kei Maeda, Shuichi Sakamoto and Yôiti Suzuki
Appl. Sci. 2022, 12(1), 173; https://doi.org/10.3390/app12010173 - 24 Dec 2021
Cited by 1 | Viewed by 2365
Abstract
The deterioration of sound localization accuracy during a listener's head/body rotation is independent of the listener's rotation velocity. However, whether this deterioration occurs only during physical movement in a real environment remains unclear. In this study, we addressed this question by subjecting physically stationary listeners to visually induced self-motion, i.e., vection. Two conditions were adopted: one with a visually induced perception of self-motion (vection) and one without (control). Under both conditions, a short noise burst (30 ms) was presented via a loudspeaker in a circular array placed horizontally in front of the listener. The listeners were asked to judge whether the acoustic stimulus was located to the left or right of their subjective midline. The results showed that, in terms of detection thresholds based on the subjective midline, sound localization accuracy was lower under the vection condition than under the control condition. This indicates that sound localization can be compromised under visually induced self-motion perception. These findings support the idea that self-motion information is crucial for auditory space perception and could enable the design of dynamic binaural displays requiring fewer computational resources.
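
The threshold measure named in the abstract can be illustrated with a generic sketch (not the authors' analysis code): left/right judgements relative to the subjective midline are commonly summarized by fitting a cumulative-Gaussian psychometric function, where the fitted mean gives the subjective midline (point of subjective equality) and the fitted spread gives a detection threshold. The response proportions below are hypothetical, included only so the example runs.

```python
# A generic sketch (not the authors' code) of estimating a subjective-
# midline detection threshold from left/right judgements: fit a cumulative
# Gaussian to the proportion of "right" responses versus loudspeaker
# azimuth. The fitted mean is the subjective midline (PSE); the fitted
# sigma is one common threshold measure.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(az_deg, mu, sigma):
    # Probability of a "right" response for a stimulus at az_deg.
    return norm.cdf(az_deg, loc=mu, scale=sigma)

# Hypothetical per-azimuth response proportions, for illustration only.
azimuths = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_right  = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, azimuths, p_right, p0=(0.0, 5.0))
print(f"subjective midline (PSE): {mu:+.2f} deg; threshold (sigma): {sigma:.2f} deg")
```

Under this kind of analysis, a higher fitted sigma in the vection condition than in the control condition would correspond to the accuracy deterioration the paper reports.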