Mobile Spatial Audio

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (15 May 2019) | Viewed by 7776

Special Issue Editor


Prof. Dr. Jose J. Lopez
Guest Editor
Polytechnic University of Valencia, 46022 Valencia, Spain
Interests: audio; acoustics; signal processing; machine learning; multimedia

Special Issue Information

Dear Colleagues,

Spatial audio technologies have gained great popularity in recent years through applications in fields such as Virtual Reality (VR), high-definition TV (HDTV), gaming, and mobile devices. Among these applications, smartphones and other mobile devices offer a great opportunity for novel developments and also have great commercial potential. In particular, the growing popularity of headphones as a way of listening to mobile content suggests directions for future research and development. Although spatial audio over headphones is a mature field, important challenges remain, such as realistic binaural rendering and individualization of the head-related transfer function (HRTF). Moreover, mobile devices include inertial sensors (accelerometers, gyroscopes) that can be employed in Virtual and Augmented Reality audio applications. Spatial audio recording with smartphones remains a largely unaddressed gap that has yet to be closed. Efficient signal processing methods are important in mobile devices to extend battery life, and the introduction of GPUs and multi-core processors facilitates the implementation of high-quality spatial audio solutions.

Prof. Dr. Jose J. Lopez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Individualization of HRTF in mobile devices
  • HRTF interpolation
  • Head-tracking on mobile phones
  • Immersive mobile audio for Virtual Reality
  • Augmented audio reality on smartphones
  • Headphone correction on smartphones
  • Spatial audio enhancement in mobile recordings
  • Microphone array processing in mobile devices
  • User interaction for spatial audio
  • Efficient signal processing methods for mobile devices
  • GPU use in mobile devices for spatial audio

Published Papers (2 papers)


Research

23 pages, 2558 KiB  
Article
Individualized Interaural Feature Learning and Personalized Binaural Localization Model
by Xiang Wu, Dumidu S. Talagala, Wen Zhang and Thushara D. Abhayapala
Appl. Sci. 2019, 9(13), 2682; https://doi.org/10.3390/app9132682 - 30 Jun 2019
Cited by 4 | Viewed by 2620
Abstract
The increasing importance of spatial audio technologies has demonstrated the need and importance of correctly adapting to the individual characteristics of the human auditory system, and illustrates the crucial need for humanoid localization systems for testing these technologies. To this end, this paper introduces a novel feature analysis and selection approach for binaural localization and builds a probabilistic localization mapping model, especially useful for the vertical dimension localization. The approach uses the mutual information as a metric to evaluate the most significant frequencies of the interaural phase difference and interaural level difference. Then, by using the random forest algorithm and embedding the mutual information as a feature selection criterion, the feature selection procedures are encoded with the training of the localization mapping. The trained mapping model is capable of using interaural features more efficiently, and, because of the multiple-tree-based model structure, the localization model shows robust performance to noise and interference. By integrating the direct path relative transfer function estimation, we propose to devise a novel localization approach that has improved performance in the presence of noise and reverberation. The proposed mapping model is compared with the state-of-the-art manifold learning procedure in different acoustical configurations, and a more accurate and robust output can be observed.
(This article belongs to the Special Issue Mobile Spatial Audio)

16 pages, 3291 KiB  
Article
Auditory Localization in Low-Bitrate Compressed Ambisonic Scenes
by Tomasz Rudzki, Ignacio Gomez-Lanzaco, Jessica Stubbs, Jan Skoglund, Damian T. Murphy and Gavin Kearney
Appl. Sci. 2019, 9(13), 2618; https://doi.org/10.3390/app9132618 - 28 Jun 2019
Cited by 14 | Viewed by 4837
Abstract
The increasing popularity of Ambisonics as a spatial audio format for streaming services poses new challenges to existing audio coding techniques. Immersive audio delivered to mobile devices requires an efficient bitrate compression that does not affect the spatial quality of the content. Good localizability of virtual sound sources is one of the key elements that must be preserved. This study was conducted to investigate the localization precision of virtual sound source presentations within Ambisonic scenes encoded with Opus low-bitrate compression at different bitrates and Ambisonic orders (1st, 3rd, and 5th). The test stimuli were reproduced over a 50-channel spherical loudspeaker configuration and binaurally using individually measured and generic Head-Related Transfer Functions (HRTFs). Participants were asked to adjust the position of a virtual acoustic pointer to match the position of the virtual sound source within the bitrate-compressed Ambisonic scene. Results show that auditory localization in low-bitrate compressed Ambisonic scenes is not significantly affected by codec parameters. The key factors influencing localization are the rendering method and Ambisonic order truncation. This suggests that efficient perceptual coding might be successfully used for mobile spatial audio delivery.
(This article belongs to the Special Issue Mobile Spatial Audio)
