Processing Techniques Applied to Audio, Image and Brain Signals

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 8483

Special Issue Editors


Prof. Dr. Lorenzo J. Tardón
Guest Editor
Application of Information and Communication Technologies (ATIC) Research Group, ETSI Telecomunicación, Campus Universitario de Teatinos s/n, 29071 Malaga, Spain
Interests: serious games; digital audio and image processing; pattern analysis and recognition; applications of signal processing techniques and methods

Prof. Dr. Isabel Barbancho
Guest Editor
ETSI Telecomunicación, Campus Universitario de Teatinos s/n, 29071 Malaga, Spain
Interests: musical acoustics; signal processing; multimedia applications; audio content analysis; serious games and new methods for music learning

Special Issue Information

Dear Colleagues,

Nowadays, digital data of a very diverse nature are continuously captured for use in different aspects of our society. The availability of these data, together with traditional and more recent processing techniques in fields such as digital image processing and sound and music analysis, makes it possible to create new applications and to gain a greater understanding of the environment that surrounds us.

While audio and image signals are processed, brain activity can also be observed to gain deeper knowledge of its behavior in response to visual or auditory stimuli or activities. Furthermore, image and sound processing techniques can be applied jointly with the analysis of signals extracted from brain activity, giving rise to new forms of data interpretation and exploration.

Hence, this Special Issue aims to disseminate, to the international research community, novel algorithms, methods and developments in image and sound processing techniques devised for new applications in our society, as well as the application of signal processing techniques to brain activity signals (acquired by means of EEG or other recording techniques) to gain a better understanding of the brain, specifically in relation to image or sound.

Thus, among others, the following research topics are considered:

  • Applied audio signal processing:
    • Applied speech signal processing: analysis, generation, identification, source separation, etc.
    • Applied music signal processing: music transcription, interaction, creation, etc.
  • Applied image processing:
    • Applied image classification and retrieval: object identification, optical character recognition, optical music recognition, etc.
    • Applied image enhancement, segmentation, transformation, etc.
  • Applied brain signal processing:
    • EEG, MEG, fMRI, etc., signal processing.
    • Applied joint processing of visual stimuli and brain signals.
    • Music signals and brain signal activity processing: mood, learning, listening, etc.
    • Interaction and interfaces.

This Special Issue, entitled “Processing Techniques Applied to Audio, Image and Brain Signals”, will contribute to disseminating current advances in the development of new applications in the audio and image processing fields and in the understanding of the link between image and audio signals and brain activity.

Prof. Dr. Lorenzo J. Tardón
Prof. Dr. Isabel Barbancho
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • audio signal processing
  • speech processing
  • music signal processing
  • music transcription
  • image processing
  • classification
  • recognition
  • brain signal processing
  • EEG, MEG, NIRS, fMRI signals
  • music and the brain
  • sound and the brain

Published Papers (4 papers)


Research

17 pages, 3471 KiB  
Article
A New Method for Detecting Onset and Offset for Singing in Real-Time and Offline Environments
by Behnam Faghih, Sutirtha Chakraborty, Azeema Yaseen and Joseph Timoney
Appl. Sci. 2022, 12(15), 7391; https://0-doi-org.brum.beds.ac.uk/10.3390/app12157391 - 22 Jul 2022
Cited by 3 | Viewed by 2527
Abstract
This paper introduces a new method for detecting onsets, offsets, and transitions of the notes in real-time solo singing performances. It identifies onsets and offsets by locating the transitions from one note to another through trajectory changes in the fundamental frequency. The accuracy of the approach is compared with that of eight well-known algorithms. It was tested on two datasets containing 130 singing recordings, totalling more than seven hours of audio and more than 41,000 onset annotations. The analysis metrics used include the Average, the F-Measure Score, and ANOVA. The proposed algorithm was observed to determine onsets and offsets more accurately than the other algorithms. Additionally, unlike the other algorithms, the proposed algorithm can detect the transitions between notes.
(This article belongs to the Special Issue Processing Techniques Applied to Audio, Image and Brain Signals)
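The abstract above describes locating note onsets and offsets through trajectory changes in the fundamental frequency. The following Python snippet is only a minimal, hypothetical sketch of that general idea; the hop size, the semitone threshold and the function name detect_transitions are illustrative assumptions and do not reproduce the authors' published algorithm.

```python
import numpy as np

def detect_transitions(f0_hz, hop_s=0.01, min_jump_semitones=0.8):
    """Toy sketch (not the paper's method): flag onsets/offsets where the
    F0 trajectory starts, stops, or jumps by more than a semitone threshold.
    Unvoiced frames are assumed to be encoded as 0 Hz."""
    f0 = np.asarray(f0_hz, dtype=float)
    voiced = f0 > 0
    # Work on a semitone scale so jumps correspond to pitch intervals.
    semitones = np.full_like(f0, np.nan)
    semitones[voiced] = 12.0 * np.log2(f0[voiced] / 440.0)
    onsets, offsets = [], []
    for i in range(1, len(f0)):
        if voiced[i] and not voiced[i - 1]:
            onsets.append(i * hop_s)        # voicing starts -> onset
        elif voiced[i - 1] and not voiced[i]:
            offsets.append(i * hop_s)       # voicing stops -> offset
        elif voiced[i] and voiced[i - 1]:
            if abs(semitones[i] - semitones[i - 1]) > min_jump_semitones:
                offsets.append(i * hop_s)   # note change: offset of old note...
                onsets.append(i * hop_s)    # ...and onset of the new one
    return onsets, offsets

# Example: a synthetic two-note contour (A4 then B4) with silent ends.
contour = [0.0] * 5 + [440.0] * 50 + [493.88] * 50 + [0.0] * 5
print(detect_transitions(contour))
```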

22 pages, 2100 KiB  
Article
Smart-Median: A New Real-Time Algorithm for Smoothing Singing Pitch Contours
by Behnam Faghih and Joseph Timoney
Appl. Sci. 2022, 12(14), 7026; https://0-doi-org.brum.beds.ac.uk/10.3390/app12147026 - 12 Jul 2022
Cited by 4 | Viewed by 1712
Abstract
Pitch detection is usually one of the fundamental steps in audio signal processing. However, it is common for pitch detectors to estimate a portion of the fundamental frequencies incorrectly, especially in real-time environments and when applied to singing. Therefore, the estimated pitch contour usually contains errors. To remove these errors, a contour-smoothing algorithm should be employed. However, because none of the current contour-smoothing algorithms has been explicitly designed for contours generated from singing, they are often unsuitable for this purpose. Therefore, this article introduces a new smoothing algorithm that rectifies this. The proposed algorithm is compared with 15 other smoothing algorithms over approximately 2700 pitch contours, using four metrics. According to all the metrics, the proposed algorithm smooths the contours more accurately than the other algorithms. A distinct conclusion is that smoothing algorithms should be designed according to the contour type and the final application of the result.
(This article belongs to the Special Issue Processing Techniques Applied to Audio, Image and Brain Signals)
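As a rough, hypothetical illustration of median-based pitch-contour smoothing (a plain causal sliding median, not the Smart-Median algorithm itself, whose details are in the paper), a real-time-friendly smoother might look like the sketch below; the window length and the function name are assumptions made for the example.

```python
import numpy as np

def causal_median_smooth(f0_hz, window=5):
    """Minimal sketch: replace each frame with the median of the last
    `window` frames, suppressing isolated octave errors and spikes in a
    pitch contour without looking ahead (suitable for real-time use)."""
    f0 = np.asarray(f0_hz, dtype=float)
    smoothed = np.empty_like(f0)
    for i in range(len(f0)):
        start = max(0, i - window + 1)
        smoothed[i] = np.median(f0[start:i + 1])
    return smoothed

# Example: a steady ~220 Hz contour with one octave-error spike at frame 3.
raw = np.array([220.0, 221.0, 219.5, 440.0, 220.5, 220.0, 219.8])
print(causal_median_smooth(raw))
```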

22 pages, 2970 KiB  
Article
Automatic Clustering of Students by Level of Situational Interest Based on Their EEG Features
by Ernee Sazlinayati Othman, Ibrahima Faye and Aarij Mahmood Hussaan
Appl. Sci. 2022, 12(1), 389; https://0-doi-org.brum.beds.ac.uk/10.3390/app12010389 - 31 Dec 2021
Viewed by 1363
Abstract
The usage of physiological measures to detect students' interest is often said to address the weaknesses of psychological measures by decreasing the susceptibility to subjective bias. The existing methods, especially EEG-based ones, use classification, which requires predefined classes and complex computation for the analysis. However, the predefined classes are mostly based on subjective measurements (e.g., questionnaires). This work proposes a new scheme to automatically cluster students by their level of situational interest (SI) during lessons based on their electroencephalography (EEG) features. The resulting clusters are then used as ground truth for classification purposes. EEG was recorded simultaneously from 30 students while attending a lecture in a real classroom. The frontal mean delta and alpha power, as well as the frontal alpha asymmetry (FAA) metric, served as the input for the k-means and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering algorithms. Using the collected data, 29 models were trained within nine domain classifiers, and the classifiers with the highest performance were selected. All models were validated through 10-fold cross-validation. The high-SI cluster comprised students with lower frontal mean delta and alpha power together with negative FAA. k-means performed better, reaching performance assessment parameters of 100% when clustering the students into three groups: high SI, medium SI and low SI, whereas DBSCAN showed reduced performance when clustering the dataset without the outliers. The findings of this study offer a promising option for clustering students by their SI level, as well as addressing the drawbacks of the existing methods, which rely on subjective measures.
(This article belongs to the Special Issue Processing Techniques Applied to Audio, Image and Brain Signals)
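A minimal sketch of the clustering step described above, assuming a hypothetical per-student feature matrix of frontal delta power, frontal alpha power and frontal alpha asymmetry: the scikit-learn calls are standard, but the synthetic data and the eps/min_samples values are illustrative only and are not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per student, columns are
# [frontal mean delta power, frontal mean alpha power, frontal alpha asymmetry].
rng = np.random.default_rng(0)
features = rng.normal(size=(30, 3))

X = StandardScaler().fit_transform(features)

# k-means with k = 3, mirroring the high/medium/low SI grouping in the paper.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# DBSCAN; the eps and min_samples values here are illustrative guesses.
dbscan_labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(X)

print(kmeans_labels)
print(dbscan_labels)   # label -1 marks points treated as noise/outliers
```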

12 pages, 904 KiB  
Article
Music with Concurrent Saliences of Musical Features Elicits Stronger Brain Responses
by Lorenzo J. Tardón, Ignacio Rodríguez-Rodríguez, Niels T. Haumann, Elvira Brattico and Isabel Barbancho
Appl. Sci. 2021, 11(19), 9158; https://0-doi-org.brum.beds.ac.uk/10.3390/app11199158 - 01 Oct 2021
Cited by 3 | Viewed by 1862
Abstract
Brain responses are often studied under strictly experimental conditions in which electroencephalograms (EEGs) are recorded to reflect reactions to short and repetitive stimuli. However, in real life, aural stimuli are continuously mixed and cannot be found in isolation, such as when listening to music. In this audio context, the acoustic features in music related to brightness, loudness, noise, and spectral flux, among others, change continuously; thus, significant values of these features can occur nearly simultaneously. Such situations are expected to give rise to stronger brain reactions than when the features appear in isolation. To assess this, EEG signals recorded while listening to a tango piece were considered. The focus was on the amplitude and latency of the negative deflection (N100) and positive deflection (P200) after the stimuli, which were defined on the basis of the selected music feature saliences, in order to perform a statistical analysis intended to test the initial hypothesis. Differences in brain reactions can be identified depending on the concurrence (or not) of such significant values of different features, proving that concurrent increments in several qualities of music influence and modulate the strength of brain responses.
(This article belongs to the Special Issue Processing Techniques Applied to Audio, Image and Brain Signals)
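The following is a hedged, generic illustration of extracting N100/P200 amplitudes and latencies from a single stimulus-locked EEG epoch (ordinary ERP peak picking, not the paper's exact pipeline); the window bounds, sampling rate and function name are assumptions made for the example.

```python
import numpy as np

def n1_p2_peaks(epoch_uv, sfreq, n1_win=(0.07, 0.15), p2_win=(0.15, 0.25)):
    """Illustrative sketch: return the amplitude and latency of the N100
    (most negative sample in an early window) and the P200 (most positive
    sample in a later window) from a 1-D, baseline-corrected epoch in
    microvolts whose time zero is the stimulus onset."""
    times = np.arange(len(epoch_uv)) / sfreq

    def peak(win, find_max):
        mask = (times >= win[0]) & (times <= win[1])
        seg, seg_t = epoch_uv[mask], times[mask]
        idx = np.argmax(seg) if find_max else np.argmin(seg)
        return seg[idx], seg_t[idx]

    n1_amp, n1_lat = peak(n1_win, find_max=False)
    p2_amp, p2_lat = peak(p2_win, find_max=True)
    return {"N100": (n1_amp, n1_lat), "P200": (p2_amp, p2_lat)}

# Example with a synthetic epoch sampled at 250 Hz: a negative bump near
# 100 ms and a positive bump near 200 ms.
sfreq = 250
t = np.arange(0, 0.4, 1 / sfreq)
epoch = -4 * np.exp(-((t - 0.10) ** 2) / 1e-3) + 6 * np.exp(-((t - 0.20) ** 2) / 1e-3)
print(n1_p2_peaks(epoch, sfreq))
```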
